## 2070: "Trig Identities"

**Moderators:** Moderators General, Prelates, Magistrates

### Re: 2070: "Trig Identities"

"The Machine Stops", by E. M. Forster (1909)

Barry Schwartz TED Talk: "The Paradox of Choice" (Featuring the True Secret to Happiness)

### Re: 2070: "Trig Identities"

It looks like Randall made a mistake in the eighth equation.


Code: Select all

`casθ = o/c`

should be

Code: Select all

`casθ = a^2/(oc)`

if he's starting with the cosine equation, or alternatively

Code: Select all

`casθ = c/e`

if he's starting from secant.

### Re: 2070: "Trig Identities"

Feynman complains in his book about the trig notation for exactly this reason: i.e. that it looks like multiplication. He invented his own notation for sin (a kind of sigma with a long top bit iirc). Personally I'm more disturbed by upper indices that aren't exponentiation.

xtifr wrote:... and orthogon merely sounds undecided.

### Re: 2070: "Trig Identities"

orthogon wrote:Personally I'm more disturbed by upper indices that aren't exponentiation.

Would you be okay with Einstein index notation x_{b}^{a} if it was being used to represent the Vandermonde matrix?

I NEVER use all-caps.

### Re: 2070: "Trig Identities"

orthogon wrote:Feynman complains in his book about the trig notation for exactly this reason: i.e. that it looks like multiplication. He invented his own notation for sin (a kind of sigma with a long top bit iirc). Personally I'm more disturbed by upper indices that aren't exponentiation.

I've always been a little bothered by the superscript -1 for inverse. It's made only slightly less disturbing by the fact that the reciprocal function is its own inverse.

### Re: 2070: "Trig Identities"

da Doctah wrote:orthogon wrote:Feynman complains in his book about the trig notation for exactly this reason: i.e. that it looks like multiplication. He invented his own notation for sin (a kind of sigma with a long top bit iirc). Personally I'm more disturbed by upper indices that aren't exponentiation.

I've always been a little bothered by the superscript -1 for inverse. It's made only slightly less disturbing by the fact that the reciprocal function is its own inverse.

I think the idea there is that rather than writing, say, f(f(f(f(f(f()))))) you can just write f^{6}() - and that notation then extends to f^{-1}() being the inverse of f()
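The iterate-counting idea is easy to sketch in a few lines of Python (the helper name `iterate` is mine, not standard):

```python
def iterate(f, n):
    """Return f composed with itself n times, i.e. f^n, for n >= 0."""
    def g(x):
        for _ in range(n):
            x = f(x)
        return x
    return g

double = lambda x: 2 * x
assert iterate(double, 6)(1) == 64  # f^6(1): doubling six times gives 2^6
assert iterate(double, 0)(5) == 5   # f^0 is the identity, matching x^0 = 1
```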

### Re: 2070: "Trig Identities"

da Doctah wrote:orthogon wrote:Feynman complains in his book about the trig notation for exactly this reason: i.e. that it looks like multiplication. He invented his own notation for sin (a kind of sigma with a long top bit iirc). Personally I'm more disturbed by upper indices that aren't exponentiation.

I've always been a little bothered by the superscript -1 for inverse. It's made only slightly less disturbing by the fact that the reciprocal function is its own inverse.

I was never a fan of superscript -1 either. Might be handy in dimensional analysis, but for just writing equations why trade 1 stroke (/) for 2 or 3?

And don't get me started on that three stroke ÷ abomination.

- Soupspoon
- You have done something you shouldn't. Or are about to.

### Re: 2070: "Trig Identities"

Heimhenge wrote:And don't get me started on that three stroke ÷ abomination.

I agree with you 1000‰ on that.

### Re: 2070: "Trig Identities"

Soupspoon wrote:Heimhenge wrote:And don't get me started on that three stroke ÷ abomination.

I agree with you 1000‰ on that.

I see what you did there and also agree 100%.

### Re: 2070: "Trig Identities"

rmsgrey wrote:da Doctah wrote:

I've always been a little bothered by the superscript -1 for inverse. It's made only slightly less disturbing by the fact that the reciprocal function is its own inverse.

I think the idea there is that rather than writing, say f(f(f(f(f(f()))))) you can just write f^{6}() - and that notation then extends to f^{-1}() being the inverse of f()

Doesn't that imply that cos²t should mean cos(cos t)?

### Re: 2070: "Trig Identities"

itaibn wrote:orthogon wrote:Personally I'm more disturbed by upper indices that aren't exponentiation.

Would you be okay with Einstein index notation x_{b}^{a} if it was being used to represent the Vandermonde matrix?

Ha! Maybe, although no doubt the dimension with the increasing powers turns out to correspond to the lower index...

To be honest, not liking upper indices is just my excuse for not understanding tensors. (The other day down the pub my friend was trying to explain covariance and contravariance in computer science, whilst I was trying to explain as best I could the homonymous concepts in tensors. That's how we roll. After my explanation, he just said "what, so it's just the difference between multiplication and division?")

xtifr wrote:... and orthogon merely sounds undecided.

### Re: 2070: "Trig Identities"

orthogon wrote:itaibn wrote:orthogon wrote:Personally I'm more disturbed by upper indices that aren't exponentiation.

Would you be okay with Einstein index notation x_{b}^{a} if it was being used to represent the Vandermonde matrix?

Ha! Maybe, although no doubt the dimension with the increasing powers turns out to correspond to the lower index...

To be honest, not liking upper indices is just my excuse for not understanding tensors. (The other day down the pub my friend was trying to explain covariance and contravariance in computer science, whilst I was trying to explain as best I could the homonymous concepts in tensors. That's how we roll. After my explanation, he just said "what, so it's just the difference between multiplication and division?")

Rim shot?

I know so little about tensors that I can't be sure, but orthogon's comment almost sounds like one of those jokes which presuppose obscure knowledge.

But like I said, as far as I know, it could make perfect sense. Just sayin' ...

### Re: 2070: "Trig Identities"

da Doctah wrote:rmsgrey wrote:I think the idea there is that rather than writing, say f(f(f(f(f(f()))))) you can just write f^{6}() - and that notation then extends to f^{-1}() being the inverse of f()

Doesn't that imply that cos²t should mean cos(cos t)?

It absolutely should!

Trying to explain to students why sin^{-1}(x) =/= csc(x) is the worst.
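A quick numerical check (plain Python standard library) makes the distinction concrete: the inverse function and the reciprocal give very different values.

```python
import math

x = 0.5
inverse = math.asin(x)        # sin^{-1}(x) as the inverse function: about 0.5236 (pi/6)
reciprocal = 1 / math.sin(x)  # csc(x), the reciprocal: about 2.0858
assert abs(inverse - reciprocal) > 1       # nowhere near each other
assert abs(math.sin(inverse) - x) < 1e-12  # the inverse actually undoes sin
```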

~

I always enjoy these silly maths ones. They tend to head up my powerpoints after a while. Just got done with using the 'proof by intimidation' one.

- Eebster the Great

### Re: 2070: "Trig Identities"

The superscript notation for iterated functions makes a lot of sense to me. f^{2}(x) = f(f(x)). f generates a (typically infinite) cyclic group under composition, so f^{n} in functional notation corresponds to f^{n} in multiplicative notation in that group. It also extends to derivatives and antiderivatives, which is nice, because how else are you going to write an ordinary second derivative if not d²/dx², D^{2}, or similar? I guess primes and dots are sometimes options, but they don't scale well to higher orders. Additionally, we can define half- and other fractional iterates this way. For instance, if f(x) = 8x^{4}, then f^{½}(x) = 2x^{2}, because f^{½}(f^{½}(x)) = f(x). This also fits the usual exponent properties. And of course it justifies f^{-1} as the inverse of f.

If anything, I find the exceptional cases of trigonometric and logarithmic functions much more annoying. Why should log^{4}x = (log x)^{4} instead of log log log log x? I mean, in the former case, you could just use an exponent anyway and all you need is a pair of parentheses, but in the latter case, you could actually save a lot of space. Granted, that doesn't really happen with trig functions, but I still think it's confusing, especially since in that case, sin^{-1}(x) = arcsin(x), in line with iterated composition notation, but sin^{2}(x) = [sin(x)]^{2}, in line with the exponential notation used for logs.

Other uses for superscripts may be good or bad depending on the context. Sometimes it makes sense to have multiple indices, like in Einstein summation notation, and in those cases compactness is probably better than a complete lack of ambiguity when it is clear from context. But sometimes lists seem to be indexed by superscript instead of subscript for no reason whatsoever, and I find that irritating.
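The half-iterate claim above is easy to verify numerically; here is a minimal Python sketch (the function names are mine):

```python
def f(x):
    return 8 * x**4

def f_half(x):
    # candidate half-iterate: f_half(f_half(x)) should equal f(x)
    return 2 * x**2

# f_half(f_half(x)) = 2 * (2x^2)^2 = 8x^4 = f(x), so the check passes exactly
for x in (0.3, 1.0, 2.5):
    assert abs(f_half(f_half(x)) - f(x)) < 1e-9
```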

### Re: 2070: "Trig Identities"

da Doctah wrote:rmsgrey wrote:

I've always been a little bothered by the superscript -1 for inverse. It's made only slightly less disturbing by the fact that the reciprocal function is its own inverse.

I think the idea there is that rather than writing, say f(f(f(f(f(f()))))) you can just write f^{6}() - and that notation then extends to f^{-1}() being the inverse of f()

Doesn't that imply that cos²t should mean cos(cos t)?

Yes, but it runs into some other issues.

f^{n} as a concept only works when f maps from a set to itself. Trig functions only kinda satisfy that description - yes, they map a real number onto a real number, so, at least in theory, you can iterate them, but the semantics get a bit wonky - cos and sin are really taking an angle to a number, so, while numerically it's possible to iterate them, it's nonsense to do.

Meanwhile, it's common to want to deal with (cos(x))^{2} and (sin(x))^{2} but not caring about what the x is, so it's easier to parse "cos^{2}(5x+13)" than "(cos(5x+13))^{2}" - and even more so for more complicated expressions for the angle, so the concept that's going to be more useful almost every time ends up as the convention, despite it being an exception to the more general convention.

### Re: 2070: "Trig Identities"

What rmsgrey said. Iterated trig functions are pretty rare, compared to powers of trig functions. I guess iterated trig functions can be used to make fractals, and of course there's the Dottie number, but there's not much practical application for them.
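For what it's worth, the Dottie number mentioned above is easy to find by just iterating cos; a minimal Python sketch:

```python
import math

x = 1.0
for _ in range(100):
    x = math.cos(x)  # iterating cos converges to its unique real fixed point

# x is now the Dottie number, approximately 0.739085
assert abs(x - math.cos(x)) < 1e-12  # a fixed point: cos(x) = x
assert abs(x - 0.739085) < 1e-6
```

Convergence follows because |cos'(x)| < 1 near the fixed point, so each iteration shrinks the error; any real starting value works.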

### Re: 2070: "Trig Identities"

Heimhenge wrote:orthogon wrote:itaibn wrote:orthogon wrote:Personally I'm more disturbed by upper indices that aren't exponentiation.

Would be okay with Einstein index notation x_{b}^{a}if it was being used to represent the Vandermonde matrix?

Ha! Maybe, although no doubt the dimension with the increasing powers turns out to correspond to the lower index...

To be honest, not liking upper indices is just my excuse for not understanding tensors. (The other day down the pub my friend was trying to explain covariance and contravariance in computer science, whilst I was trying to explain as best I could the homonymous concepts in tensors. That's how we roll. After my explanation, he just said "what, so it's just the difference between multiplication and division?")

Rim shot?

I know so little about tensors that I can't be sure, but orthogon's comment almost sounds like one of those jokes which presuppose obscure knowledge.

But like I said, as far as I know, it could make perfect sense. Just sayin' ...

It wasn't really intended as a joke, except that the simplicity of my friend's explanation is incongruously out-of-keeping with the supposed complexity of the concepts. My understanding is that co-vectors transform with the inverse of the transformation; but there's also a transpose in there, so that the rotational part gets reversed twice (because the transpose of an orthonormal matrix is its inverse). So on balance it's only the scaling element that works in different ways. Vectors get scaled down, whilst co-vectors get scaled up. Multiplication vs division, in other words.

(When I say there's a transpose, I mean that the matrix post-multiplies the co-vector, so it's as though the matrix is transposed).
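The multiplication-vs-division point can be sketched numerically for a pure scaling of the basis (a toy example in plain Python): vector components divide by the scale factor, co-vector components multiply by it, and their pairing stays invariant.

```python
s = 3.0                       # scale every basis vector by s
v = [2.0, 5.0]                # vector components in the old basis
w = [7.0, 1.0]                # co-vector components in the old basis

v_new = [vi / s for vi in v]  # contravariant: components transform with the inverse
w_new = [wi * s for wi in w]  # covariant: components transform with the scaling itself

pair_old = sum(a * b for a, b in zip(w, v))
pair_new = sum(a * b for a, b in zip(w_new, v_new))
assert abs(pair_old - pair_new) < 1e-12  # the scalar w(v) is basis-independent
```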

xtifr wrote:... and orthogon merely sounds undecided.

### Re: 2070: "Trig Identities"

I was immediately reminded of the Greeks: the crazy names quants have come up with for specific second and third derivatives of price: Delta, Gamma, Vanna, Charm, Vega, Vomma, Veta, Theta, Zomma, Color, Ultima. Makes you wonder if anyone in the profession actually takes it seriously. I kinda hope the whole field is a lengthy troll of finance guys by Math PhDs.

"In no set of physics laws do you get two cats." - doogly
