## Runtime

A place to discuss the science of computers and programs, from algorithms to computability.

Formal proofs preferred.

Moderators: phlip, Moderators General, Prelates

Boom9001
Posts: 2
Joined: Wed Apr 06, 2011 10:15 pm UTC

### Runtime

I'm a high-school student in my third year of Computer Science. While studying for a competition I was reviewing Big-O notation for runtime, and the book claimed that 2^n was the worst runtime of the ones it covered.
My question is: what kind of code would actually result in this runtime? I was trying to think what style of loops would create this, and could not for the life of me think of how a 2^n runtime would arise naturally in code.
I searched for a way one would create this runtime just out of interest, but couldn't find any explanation of how one would arrive at it. It has perplexed me, and my teacher couldn't give me an answer. Does anyone have an example or explanation of what type of loops or program would cause this terrible runtime?

phlip
Restorer of Worlds
Posts: 7573
Joined: Sat Sep 23, 2006 3:56 am UTC
Location: Australia
Contact:

### Re: Runtime

A simple example is the "subset sum" problem - that is, given a list of numbers (eg {1, 2, 5, 7}), can you find a subset that sums to a specific target value (eg 10)? For this example, the answer is "yes", as the subset {1,2,7} adds up to 10.

Now, the obvious simple program to do this is just one that checks every single subset, to see if it has the right total. Since there are 2^n subsets, and each one has up to n elements that you need to add up, this algorithm will take O(n·2^n) time to run.
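In Python, that naive check is only a few lines (a sketch for illustration, not tuned for speed):

```python
from itertools import combinations

def subset_sum(nums, target):
    # Check all 2^n subsets; summing each one takes up to n steps,
    # so the whole search is O(n * 2^n).
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return subset
    return None

print(subset_sum([1, 2, 5, 7], 10))  # (1, 2, 7)
```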

Worse still are problems involving permutations... you're given a list of n things, and you need to find which ordering is "best" in some way. If you simply try every permutation, there's n! of those, and O(n!) is bigger than O(2^n), or even any exponential.
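A brute-force "best ordering" search looks like this — the cost function here is a toy of my own choosing (a stand-in for something like tour length in a travelling-salesman instance), just to make the sketch runnable:

```python
from itertools import permutations

def cost(order):
    # Toy objective: total gap between consecutive items.
    return sum(abs(a - b) for a, b in zip(order, order[1:]))

def best_order(items):
    # Score all n! orderings, O(n) each -> O(n * n!) overall.
    return min(permutations(items), key=cost)

print(cost(best_order([3, 1, 4, 2])))  # 3, achieved by a sorted ordering
```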

Code: Select all

`enum ಠ_ಠ {°□°╰=1, °Д°╰, ಠ益ಠ╰};void ┻━┻︵​╰(ಠ_ಠ ⚠) {exit((int)⚠);}`
[he/him/his]

Laguana
Posts: 49
Joined: Sat Jan 19, 2008 10:13 pm UTC

### Re: Runtime

Exponential runtimes ([imath]2^n[/imath] or even [imath]2^{2^n}[/imath]) can arise from exploring trees, as well. For example, a binary tree n layers deep has about [imath]2^n[/imath] nodes.
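A concrete case of that exponential call tree — and probably the shortest answer to the original question of what such code looks like "naturally" — is the textbook doubly-recursive Fibonacci:

```python
calls = 0

def fib(n):
    # Each call spawns two more, so the call tree for fib(n)
    # has O(2^n) nodes, even though the code has no explicit loop.
    global calls
    calls += 1
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10), calls)  # 55, computed with 177 calls
```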

Another possibility comes from determinising a nondeterministic algorithm. For example, propositional satisfiability is in NP, meaning it can be solved in polynomial time if you always make the "right" choices. In practice we don't know the right choices, so we try all of them, which results in an exponential runtime.
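Trying all the nondeterministic choices for satisfiability just means looping over every truth assignment — 2^n of them for n variables. A minimal sketch (the clause encoding here is my own: literal k means variable |k|, negated when k < 0):

```python
from itertools import product

def satisfiable(clauses, n):
    # Try all 2^n truth assignments over n variables.
    for assign in product([False, True], repeat=n):
        def lit(k):
            return assign[abs(k) - 1] if k > 0 else not assign[abs(k) - 1]
        if all(any(lit(k) for k in clause) for clause in clauses):
            return True
    return False

# (x1 or not x2) and (x2 or x3)
print(satisfiable([[1, -2], [2, 3]], 3))  # True
```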

Edit:
[...] and O(n!) is bigger than O([imath]2^n[/imath]), or even any exponential.

Is that true? Isn't n! < [imath]n^n = 2^{n \log_2 n}[/imath]? Or are you not counting that as an exponential?

Boom9001
Posts: 2
Joined: Wed Apr 06, 2011 10:15 pm UTC

### Re: Runtime

Thanks, that question was really messing with me because I couldn't think of how to do it.
As for n! > 2^n: the book I was using didn't mention n!, but I would imagine it is a worse runtime, since n! would logically yield higher values than 2^n. Like I said, the book was being rather simplistic; in the competition I'm in, n! comes up only in extremely rare cases, so it was left out. It had only ranked 2^n, n^3, n^2, n log n, n, log n, and 1, so knowing how all the others could arise, I was perplexed by the 2^n.

kmatzen
Posts: 214
Joined: Thu Nov 15, 2007 2:55 pm UTC
Location: Ithaca, NY

### Re: Runtime

Laguana wrote:Is that true? Isn't n! < [imath]n^n = 2^{n \log_2 n}[/imath]? Or are you not counting that as an exponential?

Sure... but [imath]2^{n \log_2 n}[/imath] is not O([imath]2^n[/imath]), so I wouldn't count it as a plain exponential. Just plot the two functions: n! > 2^n for n > 3. If you want a more rigorous proof, use the gamma function and do it analytically.

Yakk
Poster with most posts but no title.
Posts: 11129
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

### Re: Runtime

O notation is pretty useless in the exponentials.

There are complexity hierarchies that don't break down as easily in the exponentials.

O( 2^(2n) ) = O( (2^n)^2 ), which I believe is bigger than O( 2^n ) (but I'm less than certain -- feel free to show I'm wrong!)
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

phlip
Restorer of Worlds
Posts: 7573
Joined: Sat Sep 23, 2006 3:56 am UTC
Location: Australia
Contact:

### Re: Runtime

Laguana wrote:Is that true? Isn't n! < [imath]n^n = 2^{n \log_2 n}[/imath]? Or are you not counting that as an exponential?

n! is super-exponential, in the sense that n! > O(a^n) for any constant a. n^n is super-exponential too.

Yakk wrote:O( 2^(2n) ) = O( (2^n)^2 ), which I believe is bigger than O( 2^n ) (but I'm less than certain -- feel free to show I'm wrong!)

O(2^{2n}) = O(4^n), which is bigger than O(2^n), yes. Exponentials with different bases are different sizes, in terms of big-O.

The standard definition isn't especially intuitive, so I like to rearrange it: If f(n)/g(n) is bounded from above for sufficiently large n, then f = O(g). If f(n)/g(n) increases without bound for large n, then f > O(g). 4^n/2^n = (4/2)^n = 2^n increases without bound, so O(4^n) > O(2^n).
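That ratio test is easy to play with numerically; a quick sketch:

```python
# f = O(g) iff f(n)/g(n) stays bounded. The ratio 4**n / 2**n == 2**n
# blows up, so O(4^n) strictly dominates O(2^n); by contrast
# (2 * 2**n) / 2**n is a constant 2, so 2*2^n is still O(2^n).
for n in (5, 10, 20):
    print(n, 4**n // 2**n, (2 * 2**n) // 2**n)
```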


b.i.o
Green is the loneliest number
Posts: 2519
Joined: Fri Jul 27, 2007 4:38 pm UTC
Location: Hong Kong

### Re: Runtime

For exponentials, I find it often helps to take the log (base 2, or whatever base is convenient) of things. (Sometimes multiple times in a row.) Comparing them afterwards often makes a lot more sense.
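For example, one log (base 2) is enough to separate 2^n, n!, and n^n at a glance — a quick sketch using Python's math module:

```python
import math

n = 20
log_exp = n                              # log2(2^n) = n            -> 20
log_fact = math.log2(math.factorial(n))  # log2(n!)                 -> ~61.1
log_nn = n * math.log2(n)                # log2(n^n) = n * log2(n)  -> ~86.4
print(log_exp, round(log_fact, 1), round(log_nn, 1))
```

So n! sits strictly between 2^n and n^n = 2^(n log2 n), consistent with both replies above.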

Parsifal
Posts: 113
Joined: Thu Feb 28, 2008 1:35 am UTC

### Re: Runtime

http://projecteuler.net/index.php?section=problems&id=18 is 2^n if you brute force it, although there is an efficient solution that is closer to n^2.

What does this mean for runtime? This problem has 15 rows and can be brute-forced; the larger problem #67 with 100 rows cannot!
The larger problem can be solved in <10ms on decent hardware using an efficient solution, but would take 20,000,000,000 years to brute force on a 1,000 GHz computer (and that's assuming you can check a solution in 1 clock cycle).

To clarify: this kind of runtime generally arises when you follow the most obvious path and evaluate every solution across N steps or levels, with K choices at each point. So on a binary tree with 100 levels, there are K^(N-1) = 2^99 possible solutions.
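For reference, the ~n^2 solution alluded to above is the standard bottom-up dynamic programming approach; here it is sketched on the 4-row example triangle from the problem statement:

```python
def max_path(triangle):
    # Collapse the triangle bottom-up: each cell absorbs the better of its
    # two children. Every cell is touched once -> O(n^2) for n rows,
    # versus the 2^(n-1) root-to-base paths of the brute force.
    rows = [row[:] for row in triangle]
    for i in range(len(rows) - 2, -1, -1):
        for j in range(len(rows[i])):
            rows[i][j] += max(rows[i + 1][j], rows[i + 1][j + 1])
    return rows[0][0]

print(max_path([[3], [7, 4], [2, 4, 6], [8, 5, 9, 3]]))  # 23
```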

Meteorswarm
Posts: 979
Joined: Sun Dec 27, 2009 12:28 am UTC
Location: Ithaca, NY

### Re: Runtime

phlip wrote:A simple example is the "subset sum" problem - that is, given a list of numbers (eg {1, 2, 5, 7}), can you find a subset that sums to a specific target value (eg 10)? For this example, the answer is "yes", as the subset {1,2,7} adds up to 10.

This is a not-so-good example because we don't know that subset sum doesn't have a polynomial time solution. P and NP and all that.

It's worth noting that for any computable function, you can come up with a problem that takes at least that long to compute. How about Ackermann time? Sure! I forget how, though; I'll have to go re-ask my algorithms professor. Although something like "enumerate all the numbers up to f(x)" or "how many primes are there less than f(x)?" are obvious ones.
The same as the old Meteorswarm, now with fewer posts!

phlip
Restorer of Worlds
Posts: 7573
Joined: Sat Sep 23, 2006 3:56 am UTC
Location: Australia
Contact:

### Re: Runtime

Meteorswarm wrote:This is a not-so-good example because we don't know that subset sum doesn't have a polynomial time solution. P and NP and all that.

I didn't say that the subset-sum problem doesn't have a polynomial-time solution, I just said the naive solution is O(n·2^n).

Complexity values like O(n) etc. relate to specific algorithms. Complexity classes like P and NP relate to problems. An individual problem will likely have many different algorithms that solve it, which can all have different complexities... for instance, comparison-sorting a list is in P, but different sorting algorithms can be O(n log n) or O(n^2) or O(n!), depending on how they work.


squareroot
Posts: 548
Joined: Tue Jan 12, 2010 1:04 am UTC
Contact:

### Re: Runtime

Meteorswarm wrote:How about Ackerman time? Sure! I forget how though, I'll have to go re-ask my algorithms professor .
Although just evaluating the Ackermann function would obviously take at least that long, here's one pretty fun way of doing things:
- Simulate a disjoint-set program, (pseudo-)randomly creating, merging, and finding elements.
- After each instruction, look at the elapsed number of steps, and divide by the number of operations (create, merge, find) you've done.
- If that ratio is greater than n, halt.
This will take O(A(k·n, k·n)) steps, where k is some constant depending on the implementation.
