
Cosine for x?

Posted: Fri Jun 20, 2008 1:25 am UTC
by Kaj
I'm programming a math header in C++, making my own cos function.

I read that:

[math]\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots[/math]

Trying to code an infinite series is impossible.

I need an alternate way to do this. Any ideas? :D

Re: Cosine for x?

Posted: Fri Jun 20, 2008 1:31 am UTC
by z4lis
Figure out how precise you need your approximation to be, what kind of range x will fall in, and then just cut off the series at some point.

Re: Cosine for x?

Posted: Fri Jun 20, 2008 1:56 am UTC
by jmorgan3
Kaj wrote:Trying to code an infinite series is impossible.

Code:

n=0
sum=0
while TRUE
     sum = sum + ((-1)^n) * x^(2n) / ((2n)!)
     n++
end

Re: Cosine for x?

Posted: Fri Jun 20, 2008 3:07 am UTC
by Pathway
That is a Taylor expansion. Wikipedia shall serve you well. Use one of the forms for the error term to estimate your error--I recommend the integral term.
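
For cosine the error term is particularly tame: every derivative of cos is bounded by 1 in absolute value, so the Lagrange form of the remainder says that if you cut the series off at the x^(2n) term,

[math]\left| \cos x - \sum_{k=0}^{n} \frac{(-1)^k x^{2k}}{(2k)!} \right| \le \frac{|x|^{2n+2}}{(2n+2)!}[/math]

For |x| <= pi/2 and terms up to x^10, for example, the bound is (pi/2)^12 / 12!, which is roughly 5*10^-7.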

Re: Cosine for x?

Posted: Fri Jun 20, 2008 3:14 am UTC
by Bullislander05
Okay. I'm assuming you haven't had calculus before, and if you have, you haven't done alternating series errors yet.

Anyways, the alternating series estimation theorem (I believe? Correct me if I'm wrong and it's something different) states that |S - S_n| <= a_(n+1) for any converging alternating series whose terms shrink toward zero, where S is the true sum of the series, S_n is the partial sum of the series through n terms, and a_(n+1) is the size of the first term you leave out (the a_n are the term sizes, always taken as positive).

If you're looking for the cosine of some variable with N decimal places of accuracy, you're going to want your error (the quantity |S - S_n|) to be less than a decimal point followed by N zeros and then a 5 (example: you want 3 decimal places of accuracy, so your error must be less than .0005).

So, with a little bit of substitution, you need a_(n+1) < E, where E is that allowed error. Then, the only work left to do is to find the first term that is smaller than your error.

If you're making your own math header, you'll know what you want your precision to be. So you can plug that into the condition above, and for your terms use: a_n = x^(2n)/(2n)!

Then find the first n for which a_(n+1) = x^(2n+2)/((2n+2)!) drops below your error; that n is how many terms you need for your specified level of accuracy. So you can code your cos(x) function something like:

Code:

int n = /* your n calculated above */;
double sum = 0.0, term = 1.0;               // term holds x^(2i) / (2i)!
for (int i = 0; i <= n; i++) {
    sum += (i % 2 == 0) ? term : -term;     // adds (-1)^i * x^(2i) / (2i)!
    term *= x * x / ((2*i + 1) * (2*i + 2));
}
return sum;
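
If you'd rather not work that inequality out by hand, you can also find n at run time. A rough sketch (the function name and the tolerance parameter are made up for illustration; it just walks the terms x^(2n)/(2n)! until the next one drops below the tolerance):

Code:

// Smallest n such that the first omitted term x^(2(n+1)) / ((2(n+1))!)
// is below the requested tolerance.
int taylor_terms_needed(double x, double tolerance)
{
    double next_term = x * x / 2.0;   // a_1 = x^2 / 2!
    int n = 0;
    while (next_term >= tolerance) {
        ++n;
        // a_(n+1) = a_n * x^2 / ((2n+1)(2n+2))
        next_term *= x * x / ((2 * n + 1) * (2 * n + 2));
    }
    return n;   // sum terms k = 0 .. n of (-1)^k x^(2k) / (2k)!
}

For x near pi and a tolerance of .0005, for example, this comes out to n = 6.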

Re: Cosine for x?

Posted: Fri Jun 20, 2008 8:31 am UTC
by Zohar
Also, no one seems to have mentioned it - if you use the Taylor approximation around zero (as was suggested), it won't be very accurate for x=1000, for example. You should use the fact that cosine is a periodic function. What you should do is:
0. If x is somewhere between -pi and +pi, do nothing. Otherwise,
1. Divide x by 2*pi and round to the nearest integer.
2. Subtract from x the number you got times 2*pi.
This will give you a value in the [-pi,pi] area and now you don't need to have a function that's accurate everywhere, only in that part.

For example, if x=1000, then x/(2*pi)=159.1549. So now you don't calculate cos(1000), you should calculate cos(1000-159*2*pi)=cos(0.97353). For this to work you also need a good enough approximation of pi, though.
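
In code, steps 1 and 2 are only a couple of lines. A sketch (reduce_angle is just a made-up name):

Code:

#include <cmath>

// Subtract the nearest whole multiple of 2*pi, landing in [-pi, pi].
double reduce_angle(double x)
{
    const double two_pi = 6.28318530717958647692;
    return x - std::floor(x / two_pi + 0.5) * two_pi;
}

reduce_angle(1000) comes out to about 0.9735, matching the example above.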

You could get even better results if you only use the [0,pi] part because cos(x)=cos(-x), but that will be slightly more complicated and you'll have to develop the Taylor series around pi/2. So I suggest do what I did and then use the Taylor approximation.

Re: Cosine for x?

Posted: Fri Jun 20, 2008 8:44 am UTC
by Robin S
Zohar wrote: What you should do is:
0. If x is somewhere between -pi and +pi, do nothing. Otherwise,
1. Divide x by 2*pi and round to the nearest integer.
2. Subtract from x the number you got times 2*pi.
Actually, since you're coding this, you should use a built-in modulus function.

You could get even better results if you only use the [0,pi] part because cos(x)=cos(-x), but that will be slightly more complicated and you'll have to develop the Taylor series around pi/2.
Actually, you might be able to speed things up even further by restricting your domain to [0,π/2] since cos is antisymmetric about π/2. Specifically:

1. Find x modulo 2π.
2. If this value is greater than π, subtract it from 2π.
3. If this value is greater than π/2, make a note of this fact and then subtract it from π.
4. Use the first few terms of the power series (and negate the result at the end if you made a note in step 3).

You might want to use the Taylor series about π/4 to speed things up a little, but as Zohar said that might be complicating things too much. To be honest, it might be faster to compute it using the power series for sin. I'm not sure.
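
A rough sketch of those four steps in C++, using std::fmod for the modulus as suggested above (cos_core is just a placeholder name for whatever truncated-series routine does step 4; it's assumed to be accurate on [0, pi/2]):

Code:

#include <cmath>

double cos_core(double x);   // placeholder: the truncated Taylor series, accurate on [0, pi/2]

double my_cos(double x)
{
    const double pi = 3.14159265358979323846;

    double r = std::fmod(std::fabs(x), 2.0 * pi);   // step 1: cos is even, so work with |x|; r is now in [0, 2*pi)
    if (r > pi)                                     // step 2: cos(r) = cos(2*pi - r)
        r = 2.0 * pi - r;

    bool negate = false;
    if (r > pi / 2.0) {                             // step 3: cos(r) = -cos(pi - r); note the sign for later
        negate = true;
        r = pi - r;
    }

    double result = cos_core(r);                    // step 4: the power series on [0, pi/2]
    return negate ? -result : result;
}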

Re: Cosine for x?

Posted: Fri Jun 20, 2008 9:38 am UTC
by jestingrabbit

Re: Cosine for x?

Posted: Fri Jun 20, 2008 9:48 am UTC
by Robin S
Come to think of it, I've got a related question. What's the fastest way to compute both the cos and sin of the same (real) variable? I'm guessing there's some way to do them simultaneously, but is that really faster than getting the hardware to do both separately?

Should I post a separate thread in Coding? It seems there's a fair amount of overlap with this sort of thing.

Re: Cosine for x?

Posted: Fri Jun 20, 2008 10:56 am UTC
by Token
Robin S wrote:Come to think of it, I've got a related question. What's the fastest way to compute both the cos and sin of the same (real) variable? I'm guessing there's some way to do them simultaneously, but is that really faster than getting the hardware to do both separately?

Should I post a separate thread in Coding? It seems there's a fair amount of overlap with this sort of thing.

cos^2(x) + sin^2(x) = 1?

Re: Cosine for x?

Posted: Fri Jun 20, 2008 10:58 am UTC
by Robin S
That's pretty slow, as it requires taking a square root. Also, it doesn't give the sign.

This is why I thought maybe it belonged in Coding, rather than Maths.

Re: Cosine for x?

Posted: Fri Jun 20, 2008 11:21 am UTC
by Token
I was under the impression that computing square roots is more efficient than just doing the second cos/sin from scratch (and the sign is cheap). But yeah, this probably belongs in Coding.
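
For what it's worth, a sketch of that approach, assuming the angle has already been reduced to [-pi, pi] so the sign of sin(x) is just the sign of x (the function name is made up):

Code:

#include <cmath>

// Given x already reduced to [-pi, pi] and c = cos(x), recover sin(x)
// from sin^2 + cos^2 = 1; on that interval sin(x) has the same sign as x.
double sin_from_cos(double x, double c)
{
    double s = std::sqrt(std::fmax(0.0, 1.0 - c * c));   // fmax guards against rounding pushing 1 - c^2 just below 0
    return (x < 0.0) ? -s : s;
}

The usual caveat is that 1 - c^2 loses precision when c is near ±1, i.e. when x is near 0 or ±pi.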

Re: Cosine for x?

Posted: Fri Jun 20, 2008 12:30 pm UTC
by Kaj
I was going to put it in Coding but Coding was full of posts like "What API should I use?". This is more math-oriented anyway.

Re: Cosine for x?

Posted: Fri Jun 20, 2008 1:21 pm UTC
by z4lis
You could calculate them at the same time...

Take your closed-form summation and put a variable called "c" (for counter) in the exponent for x (using the wiki article's variable choices), so the general term would be (-1)^n * x^(2n+c) / (2n+c)!; with c=0 that's a cosine term and with c=1 a sine term.

Now, make a program loop around the sum, but incorporate this c value. Start with n=0, c=0. Add something like: "If c is zero, add the term we just found to cosine. Set c=1. Go to the beginning. If c is one, add the term we just found to sine, set c=0, set n=n+1. Go to the beginning."
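
Roughly, in C++, that single loop might look like this (a sketch only; it interleaves the two series by keeping one running term x^k/k! rather than literally toggling c, and the names are made up):

Code:

// One pass over the shared terms x^k / k!, sending even k to cosine and
// odd k to sine, with the sign flipping after each (cos, sin) pair.
void sin_cos(double x, int n_terms, double& s, double& c)
{
    s = 0.0;
    c = 0.0;
    double term = 1.0;   // x^k / k!, starting at k = 0
    int sign = 1;
    for (int k = 0; k < n_terms; ++k) {
        if (k % 2 == 0)
            c += sign * term;
        else {
            s += sign * term;
            sign = -sign;
        }
        term *= x / (k + 1);   // advance x^k/k! -> x^(k+1)/(k+1)!
    }
}

The only work actually shared is the term update, so whether this beats two separate loops probably comes down to how the compiler handles it.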

Perhaps that's not what you had in mind.

Re: Cosine for x?

Posted: Fri Jun 20, 2008 1:37 pm UTC
by Robin S
Would that actually be significantly faster than doing the two loops separately? I suppose it might...

Re: Cosine for x?

Posted: Fri Jun 20, 2008 4:01 pm UTC
by headprogrammingczar
If you don't need both to be particularly accurate, you can store a hash table of cosine and sine values. Then, given a cosine value and a theta, you can use Newton's Method to get really close to the sine.

Re: Cosine for x?

Posted: Mon Jun 23, 2008 2:23 am UTC
by ConMan
I can't help but think that there'd be some sort of trick using the fact that sin and cos are (up to a sign factor) derivatives of each other. So if you had 1st order approximations for each, you could feed those into each other to give quickly-converging 2nd order approximations for both. My brain is failing to provide a decent method at the moment, though.

Re: Cosine for x?

Posted: Mon Jun 23, 2008 2:59 am UTC
by ConMan
OK, I've thought of a method that uses numerical integration. I'm not 100% convinced of its accuracy, but here goes:

First, assume that [imath]x \in (0, \pi/2)[/imath], and if it isn't, use the modulus reduction described earlier (0 and [imath]\pi/2[/imath] are degenerate cases that you can test for and give exact solutions for, so I'll ignore them because I don't know what they'll do to my algorithm). Select an integer n, which will affect both the accuracy of the result and the number of computations required. Partition the interval [0, x] with the sequence [imath]\left(x_0, x_1, x_2, \ldots, x_{n-1}, x_n\right) = \left(0, h, 2h, \ldots, \left(n-1\right)h, nh\right)[/imath], where [imath]h = x/n[/imath]. Then (apologies for the poor pseudo-code):

Code:

Set y(0) = 0; # Approximation for sine
Set z(0) = 1; # Approximation for cosine

For k = 1 to n
 Let z(k) = z(k-1) - h * y(k-1);
 Let y(k) = y(k-1) + h * z(k-1); # Using z(k) instead of z(k-1) may give a better answer
Loop k


Then y(n) should be an approximation of sin(x), and z(n) an approximation of cos(x), with error [imath]\mathcal{O}\left(1/n\right)[/imath] (the per-step error is [imath]\mathcal{O}\left(1/n^2\right)[/imath], but it accumulates over the n steps), if I've got it right. There are 2 additions and 2 multiplications in each iteration, so the total work is about 4n floating-point operations, i.e. [imath]\mathcal{O}\left(n\right)[/imath]. Depending on your memory requirements, you can discard y(k) and z(k) at each step, or store them to draw a pretty graph showing how the approximations look.

I make no guarantees as to this being more efficient or accurate than calculating the two functions separately.