We've all experienced that warm fuzzy feeling that comes from taking an O(lg n) algorithm and replacing it with an elegant, 5-line implementation of an O(n!) algorithm....

Or maybe not.

Truthfully, I think we all strive so much for efficiency and good practices that we could have a lot of fun with this idea and perhaps get some benefit in the process.

Get the bad algorithms out of your system! Seize the bull by the horns!

Or if seizing the bull by the horns seems like too straightforward of an algorithm, then try throwing peanuts in the bull's face using a teaspoon until he stomps you into the dust and has a heart attack from overexertion. Maybe that's still too efficient, though?

Rube Goldberg approaches are welcome, but what I really want to shoot for here is very simple algorithms which are astonishingly, elegantly inefficient.

Stupid sort: Start at the beginning of the list and move forward. Compare each item to the previous item. If an item is smaller than the previous one, move it to the very beginning of the list and restart. When you get all the way through the list, it will be sorted smallest to biggest.
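Taken literally, that description might come out as the following Python sketch (the function name and the copy-then-scan details are mine, not the original poster's):

```python
def stupid_sort(items):
    """Move any out-of-order item to the front and restart the scan."""
    items = list(items)  # work on a copy
    i = 1
    while i < len(items):
        if items[i] < items[i - 1]:
            # item is smaller than its predecessor: send it to the very
            # beginning of the list and restart from the front
            items.insert(0, items.pop(i))
            i = 1
        else:
            i += 1
    return items
```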

bogosort(list):
    while the list is not sorted:
        shuffle the list
    return list

Guaranteed to terminate successfully with probability 1, but it's impossible to give a worst-case O bound, since the running time has no upper bound. (The expected running time is O(n·n!), though.)
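For the record, a runnable Python version of that pseudocode (assuming in-place shuffling with random.shuffle):

```python
import random

def bogosort(items):
    """Shuffle until sorted. Terminates with probability 1; no worst-case bound."""
    items = list(items)
    # "the list is not sorted" == some adjacent pair is out of order
    while any(a > b for a, b in zip(items, items[1:])):
        random.shuffle(items)
    return items
```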

A more practical example: I once replaced a sequential O(n*k) algorithm (for a small fixed k) with an OpenCL kernel that ended up being O(n*n). Initial testing showed the GPU to be faster. But as n grew larger than the number of shader cores on our GPU, performance became slightly sluggish. Oops.

Now it's a sequential O(n) step on the CPU with some O(n*k) postprocessing on the GPU and everything is fine.

factorial = product . enumFromTo 1

isPrime n = factorial (n - 1) `mod` n == n - 1

Wow. Perfect example. It's not merely inefficient, it's actually elegantly bad. It took me several seconds to think it through and check that it's correct.
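For readers who don't speak Haskell: the line above is Wilson's theorem, which says n > 1 is prime iff (n-1)! ≡ -1 (mod n). The same trick in Python might look like this (names mine):

```python
from math import prod

def is_prime(n):
    # Wilson's theorem: n > 1 is prime iff (n-1)! mod n == n - 1.
    # Computing a full factorial per query is the elegantly bad part.
    return n > 1 and prod(range(1, n)) % n == n - 1
```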

Hmmm, I'm not getting the syntax. What language is that? Can you write it in Java or C?

Edit: Wait, maybe I get it. I'm not sure how to do linked lists in Java, though—I'm still learning. So would the following pseudocode capture your algorithm?

sort(list):
    if the list is empty:
        return the empty list
    else:
        set a = first item of the list
        set as = the rest of the list
        set as' = a followed by sort(as)
        if as' is sorted:
            return as'
        else:
            return sort(as followed by a)

If so, I think you're right, that's O(n!). Nicely done.

def slowsort(A, i, j):
    # This procedure sorts the subarray A[i]...A[j]
    if i >= j:
        return
    m = (i + j) // 2
    slowsort(A, i, m)
    slowsort(A, m + 1, j)
    if A[m] > A[j]:
        A[m], A[j] = A[j], A[m]
    slowsort(A, i, j - 1)

It has an alleged complexity of O(n^{log n}) = O(e^{(log n)^2}), which is the weirdest time complexity I've seen so far. Today I learned: that's called quasi-polynomial time.
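To watch that growth in action, here is the same slowsort with an optional call counter bolted on (the stats parameter and the driver at the bottom are my additions):

```python
def slowsort(A, i, j, stats=None):
    # Sorts the subarray A[i..j] in place, exactly as in the post above
    if stats is not None:
        stats[0] += 1  # count recursive calls
    if i >= j:
        return
    m = (i + j) // 2
    slowsort(A, i, m, stats)
    slowsort(A, m + 1, j, stats)
    if A[m] > A[j]:
        A[m], A[j] = A[j], A[m]
    slowsort(A, i, j - 1, stats)

a = [5, 3, 8, 1, 9, 2, 7, 4, 6, 0]
calls = [0]
slowsort(a, 0, len(a) - 1, calls)
# a is now sorted; calls[0] holds the (quasi-polynomially large) call count
```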


Wildcard wrote:Hmmm, I'm not getting the syntax. What language is that? Can you write it in Java or C?

Edit: Wait, maybe I get it. I'm not sure how to do linked lists in Java, though—I'm still learning. So would the following pseudocode capture your algorithm?

sort(list):
    if the list is empty:
        return the empty list
    else:
        set a = first item of the list
        set as = the rest of the list
        set as' = a followed by sort(as)
        if as' is sorted:
            return as'
        else:
            return sort(as followed by a)

If so, I think you're right, that's O(n!). Nicely done.

Yes, your pseudocode correctly translates benneh's code, which was written in Haskell (as was hotaru's factorial-based primality tester).

Xenomortis wrote:O(n^{2}) takes on new meaning when trying to find pairs of socks in the morning.

Wildcard wrote:Hmmm, I'm not getting the syntax. What language is that? Can you write it in Java or C?

Edit: Wait, maybe I get it. I'm not sure how to do linked lists in Java, though—I'm still learning. So would the following pseudocode capture your algorithm?

sort(list):
    if the list is empty:
        return the empty list
    else:
        set a = first item of the list
        set as = the rest of the list
        set as' = a followed by sort(as)
        if as' is sorted:
            return as'
        else:
            return sort(as followed by a)

If so, I think you're right, that's O(n!). Nicely done.

I believe it's basically the same as your stupid sort, but it works back to front instead of front to back, and uses an inefficient O(n) swap. Therefore, based on your analysis in the OP, I believe it is O(n*2^n).

def slowsort(A, i, j):
    # This procedure sorts the subarray A[i]...A[j]
    if i >= j:
        return
    m = (i + j) // 2
    slowsort(A, i, m)
    slowsort(A, m + 1, j)
    if A[m] > A[j]:
        A[m], A[j] = A[j], A[m]
    slowsort(A, i, j - 1)

class quickperm:
    # Based on the QuickPerm algorithm from http://www.quickperm.org/
    def __init__(self, size):
        self.permutation = [x for x in range(size)]
        self._p = [x for x in range(size + 1)]
        self._i = 1

def is_sorted(a):
    for i in range(1, len(a)):
        if a[i] < a[i - 1]:
            return False
    return True

def permsort(a):
    perm = quickperm(len(a))
    b = [a[x] for x in perm.permutation]
    while not is_sorted(b):
        perm.next_permutation()
        b = [a[x] for x in perm.permutation]
    return b

O(n!) in the worst case.
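The quickperm class above is missing its next_permutation method (it presumably got lost with the code tags). A self-contained sketch of the same permutation-walk sort, with my own reconstruction of the counting QuickPerm step from quickperm.org, might look like:

```python
class QuickPerm:
    # Counting QuickPerm: steps through every permutation of range(size),
    # one swap at a time. (next_permutation is my reconstruction, not the
    # original poster's code.)
    def __init__(self, size):
        self.permutation = list(range(size))
        self._p = list(range(size + 1))  # odometer-style counters
        self._i = 1

    def next_permutation(self):
        a, p, i = self.permutation, self._p, self._i
        p[i] -= 1
        j = p[i] if i % 2 else 0  # odd levels swap with p[i], even with 0
        a[j], a[i] = a[i], a[j]
        i = 1
        while p[i] == 0:  # carry, like an odometer rolling over
            p[i] = i
            i += 1
        self._i = i

def is_sorted(a):
    return all(a[k - 1] <= a[k] for k in range(1, len(a)))

def permsort(a):
    # walk permutations of the index list until one yields a sorted ordering
    perm = QuickPerm(len(a))
    b = [a[x] for x in perm.permutation]
    while not is_sorted(b):
        perm.next_permutation()
        b = [a[x] for x in perm.permutation]
    return b
```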

Behold your only true messiah. An entity of which you're a part. A vast and cold indifferent being. A grey clad mass without a heart.

Wildcard wrote:Hmmm, I'm not getting the syntax. What language is that? Can you write it in Java or C?

Edit: Wait, maybe I get it. I'm not sure how to do linked lists in Java, though—I'm still learning. So would the following pseudocode capture your algorithm?

sort(list):
    if the list is empty:
        return the empty list
    else:
        set a = first item of the list
        set as = the rest of the list
        set as' = a followed by sort(as)
        if as' is sorted:
            return as'
        else:
            return sort(as followed by a)

If so, I think you're right, that's O(n!). Nicely done.

I believe it's basically the same as your stupid sort, but it works back to front instead of front to back, and uses an inefficient O(n) swap. Therefore, based on your analysis in the OP, I believe it is O(n*2^n).

I thought the same thing at first, but the "as" in the last line is not sorted.

EDIT: Ninja'd. Nice to see this topic so active. I'll post what I wrote anyways. As an additional note, I have a good intuitive grasp of this stuff, but I'm actually only now working through Cormen from the very beginning. I haven't done any formal analysis of algorithm running times yet, so the following will probably look very peculiar to you.

Derek wrote:I believe it is O(n*2^n)

Try sorting a reverse sorted list with stupider-sort.

4321
  321
    21
      1
        []
        return []
      1 is sorted = true
      return 1
    21 is sorted = false
    12
      2
        []
        return []
      2 is sorted = true
      return 2
    12 is sorted = true
    return 12
  312 is sorted = false
  213
    13
      3
        []
        return []
      3 is sorted = true
      return 3
    13 is sorted = true
    return 13
  213 is sorted = false
  132
    32
      2
        []
        return []
      2 is sorted = true
      return 2
    32 is sorted = false
    23
      3
        []
        return []
      3 is sorted = true
      return 3
    23 is sorted = true
    return 23
  123 is sorted = true
  return 123
4123 is sorted = false
3214
  214
    14
      4
        []
        return []
      4 is sorted = true
      return 4
    14 is sorted = true
    return 14
  214 is sorted = false
  142
    42
      2
        []
        return []
      2 is sorted = true
      return 2
    42 is sorted = false
    24
      4
        []
        return []
      4 is sorted = true
      return 4
    24 is sorted = true
    return 24
  124 is sorted = true
  return 124
3124 is sorted = false
2143
  143
    43
      3
        []
        return []
      3 is sorted = true
      return 3
    43 is sorted = false
    34
      4
        []
        return []
      4 is sorted = true
      return 4
    34 is sorted = true
    return 34
  134 is sorted = true
  return 134
2134 is sorted = false
1432
  432
    32
      2
        []
        return []
      2 is sorted = true
      return 2
    32 is sorted = false
    23
      3
        []
        return []
      3 is sorted = true
      return 3
    23 is sorted = true
    return 23
  423 is sorted = false
  324
    24
      4
        []
        return []
      4 is sorted = true
      return 4
    24 is sorted = true
    return 24
  324 is sorted = false
  243
    43
      3
        []
        return []
      3 is sorted = true
      return 3
    43 is sorted = false
    34
      4
        []
        return []
      4 is sorted = true
      return 4
    34 is sorted = true
    return 34
  234 is sorted = true
  return 234
1234 is sorted = true
return 1234

I've shown each recursive call in its own indentation level, unless the recursive call is in a return statement (i.e. return sort(as followed by a)).

If you're sorting 54321, after you sort 4321 and check if 51234 is sorted, you then throw away all that work by just rotating the list to 43215. Then after you sort 3215 and check if 41235 is sorted, you just rotate the original list again, to 32154, and so on.

Simply put, you have 5 top-level recursive calls to sort a 4 item list before your 5 item list is sorted (worst case). You have 17 calls to sort a 16 item list before your 17 item list is sorted. And so on. Doesn't that make it O(n!)?

I can see that not all the recursive calls will be worst case running time themselves, because due to the rotation based iteration, the worst case input must have the smallest item at the end of the list. And after the first top level recursive call completes, the smallest item will become the 2nd to last item in the list and will never move backwards in the list again.

...Hmmm. Actually, I think that 23451 is the worst case running time. Or is it? Not sure.

def is_permutation(a, b):
    for i in a:
        counta = 0
        countb = 0
        for j in a:
            if i == j:
                counta += 1
        for j in b:
            if i == j:
                countb += 1
        if counta != countb:
            return False
    return True

def next_sorted_array(a, bound):
    i = len(a) - 1
    a[i] += 1

    while a[i] > bound and i > 0:
        i -= 1
        a[i] += 1

    if a[i] > bound:
        return False

    while i < len(a) - 1:
        a[i + 1] = a[i]
        i += 1

    return True

def reallydumbsort(a):
    lbound = min(a)
    ubound = max(a)
    s = [lbound for x in range(len(a))]
    while not is_permutation(a, s):
        next_sorted_array(s, ubound)
    return s

Sorting the array [0,100,100,100,100] takes 4,598,125 iterations.


import itertools

def sort(tosort):
    for candidate in itertools.product(range(min(tosort), max(tosort) + 1),
                                       repeat=len(tosort)):
        if any(list(tosort) == list(i) for i in itertools.permutations(candidate)):
            return list(candidate)
    else:
        raise ValueError("Could not find sorted list")


Intentionally looping through the permutations of candidate to see if tosort is one, instead of vice versa, to avoid "well, if we're looping through the permutations of tosort anyway...".

By my eye, this is something like O(2^{kn}n!n) to sort a list of n ints where each int is k bits long (ie max(a)-min(a) is O(2^{k})).

[edit]

Wildcard wrote:...Hmmm. Actually, I think that 23451 is the worst case running time. Or is it? Not sure.

def issorted(a):
    return all(i < j for i, j in zip(a[:-1], a[1:]))

def stupidersort(a):
    if not a:
        return []
    while True:
        first, rest = a[0:1], a[1:]
        candidate = first + stupidersort(rest)
        if issorted(candidate):
            return candidate
        a = rest + first

import timeit, itertools, pprint

results = []
timer = timeit.Timer("stupidersort(i)", "from __main__ import stupidersort, i")
for i in itertools.permutations([1, 2, 3, 4, 5]):
    i = list(i)
    results.append((i, min(timer.repeat(number=100000))))
results.sort(key=lambda x: x[1])  # yes, using the "real" sort here... so sue me
pprint.pprint(results)

def issorted(a):
    return all(i < j for i, j in zip(a[:-1], a[1:]))

def stupidersort(a):
    if not a:
        return []
    while True:
        global count
        count += 1
        first, rest = a[0:1], a[1:]
        candidate = first + stupidersort(rest)
        if issorted(candidate):
            return candidate
        a = rest + first

import itertools, pprint

results = []
for i in itertools.permutations([1, 2, 3, 4, 5]):
    count = 0
    stupidersort(list(i))
    results.append((i, count))
results.sort(key=lambda x: x[1])
pprint.pprint(results)

I really should learn more of the Python library functions, but yeah, that does make a lot more sense; a lot cleaner than my function to check if it's a permutation.


stupiderSort list:
    for perm in permutations(list):
        if perm is sorted:
            return perm

I was originally going for something O(n^n), by doing something similar to phlip. After seeing phlip's solution, I've come up with this, which I'm fairly sure is O(n^n) (phlip, are you sure yours isn't at least as bad as this?):

stupidestSort list:
    for candidate in list^(length list):
        if candidate is sorted and candidate is a permutation of list:
            return candidate

"list^n" is the collection of all lists of length n consisting of elements of 'list'.
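That "list^n" is exactly what itertools.product gives you, so stupidestSort can be sketched in Python like this (function names and the count-based permutation check are mine):

```python
from itertools import product

def is_sorted(lst):
    return all(x <= y for x, y in zip(lst, lst[1:]))

def is_permutation(a, b):
    # multiset equality without sorting (using sort here would be cheating)
    return len(a) == len(b) and all(a.count(x) == b.count(x) for x in a)

def stupidest_sort(lst):
    # try every length-n word over the elements of lst: n^n candidates
    for candidate in product(lst, repeat=len(lst)):
        candidate = list(candidate)
        if is_sorted(candidate) and is_permutation(candidate, lst):
            return candidate
```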

If you want to go really over the top, you could provide a method of determining whether a list is sorted which then recalls your list sorting algorithm:

listPower :: Int -> [a] -> [[a]]
listPower n as = foldr (liftA2 (:)) [[]] (replicate n as)

isSorted :: (Ord a) => [a] -> Bool
isSorted [] = True
isSorted [_] = True
isSorted (a:as) = a <= minimum as && isSorted as

minimum :: (Ord a) => [a] -> a
minimum = head . sort

find' :: (a -> Bool) -> [a] -> a
find' condition = head . filter condition

This is essentially the same as stupidestSort, but with a small modification. In order to check that a list is sorted, it checks that the first element is less than or equal to the minimum of the rest of the list, then checks that the rest of the list is sorted. But to find the minimum of a list, it sorts the list (!) and returns the first element. I have no idea how to work out the running time of that monstrosity. It can sort a 5 element list in less than a second, but I started sorting a 6 element list when I started writing this post, and it hasn't finished yet.

EDIT: For a maximum balance of simplicity and inefficiency:

sort(list):
    if length(list) < 2:
        return list
    else:
        if head(list) <= minimum(tail(list)):
            return head(list) followed by sort(tail(list))
        else:
            return sort(tail(list) followed by head(list))

minimum(list):
    return head(sort(list))

EDIT: The 6 element list took just over 3 hours to sort.

benneh wrote:O(n^n) (phlip, are you sure yours isn't at least as bad as this?)

Well, in mine the "n"s aren't quite measuring the same thing... so it's O(k'^n) loops of the main loop where k' is the range of the list (max minus min) and n is the length. Except complexities are usually given in terms of the length, not just the number of values, so k'=2^{k} (ie max minus min is k bits long), and the outer loop runs O(2^{kn}) times. Inside that it loops through every permutation of a list of length n, O(n!), and inside that it checks if two lists of length n are equal, O(n). Hence the full O(2^{kn}n!n) I came up with.

phlip wrote:Well, in mine the "n"s aren't quite measuring the same thing... so it's O(k'^n) loops of the main loop where k' is the range of the list (max minus min) and n is the length. Except complexities are usually given in terms of the length, not just the number of values, so k'=2^{k} (ie max minus min is k bits long), and the outer loop runs O(2^{kn}) times. Inside that it loops through every permutation of a list of length n, O(n!), and inside that it checks if two lists of length n are equal, O(n). Hence the full O(2^{kn}n!n) I came up with.

Ah, I see. So if the length of the list is significantly larger than the range, duplicates become inevitable, which don't inconvenience your algorithm as much, and your algorithm performs better than mine; whereas if the range is significantly larger than the length of your list, your algorithm produces far more unnecessary 'permutations' to check, and my algorithm performs better than yours.

In a similar vein, a number is prime if it has no non-trivial prime factors, right?
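One way to take that definition literally is mutual recursion: a number is prime iff no smaller prime divides it, and the smaller primes are found by calling the same test again. A Python sketch of that circular idea (function names mine; the exponential blowup comes from never caching anything):

```python
def is_prime(n):
    # n is prime iff it has no non-trivial prime factors...
    return n >= 2 and all(n % p != 0 for p in primes_below(n))

def primes_below(n):
    # ...where the smaller primes are found by calling is_prime again
    return [k for k in range(2, n) if is_prime(k)]
```

Every call to is_prime(n) recomputes primality for every number below n from scratch, so the running time explodes even for tiny inputs.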

Then again, real-life Haskell will optimize the shit out of that line. The only inefficiency left is that it checks for factors larger than √n.

Thank you. My tests indicate that GHC doesn't optimise it much at all, though. I wrote a quick program to print out a list of prime numbers using that algorithm; after 2 hours, it's got up to 37.

benneh wrote:My tests indicate that GHC doesn't optimise it much at all, though. I wrote a quick program to print out a list of prime numbers using that algorithm; after 2 hours, it's got up to 37.

I was promised (maybe not exactly promised) by my professor that GHC would cache all the intermediate results of isPrime and short-cut the list-building. Although, it appears that GHCi doesn't know that a list, once non-empty, can never reach a length of 0. (not.).any /= (((==0).length).).filter in manner of execution.


Memoization is not free. It can result in significant speedups, but can also consume significant memory if applied naively. I sort of doubt that any compiler could decide automatically whether to memoize or not in all situations.

I was doing an Euler problem once, one of the harder ones, and wrote some code to memoize the results of a function. It very quickly consumed all the memory on my system. I had to go back and break up the function so that it had one part that could be effectively memoized, and the rest was not.
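The trade-off is easy to see with Python's functools.lru_cache, which makes the memory/speed knob explicit (the Fibonacci example is mine, just to illustrate the point):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded: every distinct argument is cached forever
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# A bounded cache caps memory at the cost of potential recomputation:
@lru_cache(maxsize=128)
def fib_bounded(n):
    return n if n < 2 else fib_bounded(n - 1) + fib_bounded(n - 2)
```

With maxsize=None the cache grows without limit, which is exactly the failure mode described above; choosing a bound (or restructuring the function so only part of it is memoized) is a judgment call no compiler can make for you in general.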

It's kinda the "fix" function lifted over a list of functions... a more readable version (for people that don't Haskell) could be written like how "fix" is written:

That is, fix calls a function with the parameter being the return value from the same function (thanks to lazy evaluation)... essentially finding a fixed point of the function. So, for instance:
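For people who don't Haskell, here is a strict-language analogue of fix (my own sketch; Python needs a lambda to delay the self-reference that lazy evaluation gives Haskell for free):

```python
def fix(f):
    # fix(f) behaves as a fixed point of f: calling it is like calling f(fix(f)).
    # The wrapping lambda delays evaluation, since Python is strict.
    return lambda *args: f(fix(f))(*args)

# Factorial via fix: the function receives "itself" as its first argument.
factorial = fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```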

One thing that has always bugged me about Haskell is the community's fetish for point free function definitions, resulting in the most unreadable mess of symbols since APL. This,

Is perfectly readable to anyone who has seen list comprehensions before (and can probably be understood by many people who haven't), and didn't even need to use helper functions that the reader also needs to understand.

Derek wrote:One thing that has always bugged me about Haskell is the community's fetish for point free function definitions, resulting in the most unreadable mess of symbols since APL. This,

Is perfectly readable to anyone who has seen list comprehensions before (and can probably be understood by many people who haven't), and didn't even need to use helper functions that the reader also needs to understand.

Derek wrote:One thing that has always bugged me about Haskell is the community's fetish for point free function definitions, resulting in the most unreadable mess of symbols since APL. This,

Is perfectly readable to anyone who has seen list comprehensions before (and can probably be understood by many people who haven't), and didn't even need to use helper functions that the reader also needs to understand.

As long as you know what the ($x) means, it's pretty understandable. Otherwise, I get your point: I have no idea what on earth fmap . flip id =<< is supposed to do.

That's because I was explicitly trying to write something that would be readable to someone unfamiliar with Haskell, and a list comprehension fills that role better than map. Agreed that "fmap ($x)" is better if you have a passing familiarity with the language.

DR6 wrote:Otherwise, I get your point: I have no idea what on earth fmap . flip id =<< is supposed to do.

You can get a bit of a head-start by recognising that ($) is equivalent to "id" restricted to functions. So the ($x) in your version is equivalent to "flip id x".

From there you do some manipulations to make it point-free, so it can use "fix" rather than explicit self-reference, and that's where you end up.

benneh wrote:I have no idea how to work out the running time of that monstrosity. It can sort a 5 element list in less than a second, but I started sorting a 6 element list when I started writing this post, and it hasn't finished yet.

EDIT: For a maximum balance of simplicity and inefficiency:

sort(list):
    if length(list) < 2:
        return list
    else:
        if head(list) <= minimum(tail(list)):
            return head(list) followed by sort(tail(list))
        else:
            return sort(tail(list) followed by head(list))

minimum(list):
    return head(sort(list))

EDIT: The 6 element list took just over 3 hours to sort.

Let S(n,m) be the cost where the first element is m away from the right place. S(n) <= S(n, n-1)

S(1) = 1
S(n,0) <= 1 + 2 S(n-1)             (the case where head(list) <= min(tail(list)))
S(n,k) <= 1 + S(n-1) + S(n,k-1)    (otherwise)

So S(n) <= n + (n+1) S(n-1)

which works out to (n+1)! no?

So the n=6 case should be only 7 times slower than the n=5 case.

What did I miss?
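For what it's worth, treating the bound as an equality, the recurrence can be checked numerically (names mine):

```python
def S(n):
    # S(n) <= n + (n+1) * S(n-1), with S(1) = 1; treat the bound as equality
    return 1 if n == 1 else n + (n + 1) * S(n - 1)

# S(n) works out to (n+1)! - 1, so consecutive ratios S(n)/S(n-1)
# approach n + 1; in particular S(6)/S(5) is just over 7.
ratios = [S(n) / S(n - 1) for n in range(2, 8)]
```

So by this estimate the n=6 case really should be only about 7 times slower than n=5, which makes the observed 3-hour run all the more mysterious.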

One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR


Once someone goes through the process of figuring out how that works, they're 99% of the way to understanding "fix". From there, "flip id" is the same as "flip ($)", and "(=<<)" is just "flip (>>=)".

And if you need two posts on a forum, hell, if you need even one post, to explain how your single line of code works, your code is unreadable and should be rewritten.