ctdonath wrote: Next, this must become a thing: someone write an app which continually listens, identifies the dominant ambient-sound note, and tweets it in real time.
Bonus points for tweeting chords & duration.
I play flute, and a very rough calculation suggests I regularly generate more than 300 notes a minute (more on reels and jigs, fewer on waltzes).
Put a microphone in front of the instrument, run the signal through some pitch-recognition code, and every time the detected note changes, tweet the previous note along with how long it lasted.
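To give a flavour of the pitch-recognition step: a crude but classic approach for single-line instruments is autocorrelation, which finds the lag at which the waveform best matches a shifted copy of itself. This is only a sketch (pure Python, synthetic sine wave instead of a real microphone feed, and none of the windowing or interpolation a real detector would want):

```python
import math

def detect_pitch(samples, sample_rate, fmin=100.0, fmax=1000.0):
    """Rough fundamental-frequency estimate by autocorrelation.

    Searches lags corresponding to fmin..fmax and returns the
    frequency whose lag gives the strongest self-similarity.
    Good enough for a clean single-line signal; hopeless for chords.
    """
    n = len(samples)
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    best_lag, best_corr = 0, 0.0
    for lag in range(lag_min, min(lag_max, n)):
        corr = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0

# Synthetic test signal: a 440 Hz sine sampled at 8 kHz.
rate = 8000
wave = [math.sin(2 * math.pi * 440 * t / rate) for t in range(1024)]
print(detect_pitch(wave, rate))  # roughly 440 Hz (quantised to integer lags)
```

Because the search only considers whole-sample lags, the estimate is quantised (here the nearest lag gives about 444 Hz); a real implementation would interpolate around the peak.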
Even easier for MIDI instruments, because the note and duration data can be extracted directly from the MIDI stream.
MIDI would work for chords as well. Real-time analysis of chords, and of polyphony generally, delivered acoustically (i.e. as sound waves rather than as a stream of MIDI data) is, however, a whole different ball game. Acoustic pitch recognition works pretty well with single-line instruments, but even two notes at the same time is beyond anything I'm aware of, and the complexity climbs steeply with each additional simultaneous tone. I don't know enough about how CDs and MP3s are encoded, but I'm guessing that for our purposes they count as 'acoustic' delivery, because the data is not directly mappable to pitches in the way we want.
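The MIDI case really is that simple, chords included: each note-on/note-off pair carries the pitch directly, and tracking start times per note number handles overlapping notes for free. A minimal sketch, using a made-up simplified event format of (seconds-since-last-event, 'on'/'off', MIDI note number) rather than real MIDI bytes:

```python
# MIDI note numbers map to names via 12 semitones per octave (60 = middle C).
NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def note_name(num):
    """Convert a MIDI note number to a name like 'A4' (MIDI 69)."""
    return f"{NAMES[num % 12]}{num // 12 - 1}"

def notes_from_midi(events):
    """Yield (note_name, duration_in_seconds) as each note ends.

    `events` is an iterable of (delta_seconds, kind, note_number),
    a hypothetical stand-in for a parsed MIDI stream. Keeping a dict
    of start times per note number means simultaneous (chord) notes
    are tracked independently.
    """
    clock = 0.0
    started = {}  # note number -> absolute start time
    for delta, kind, note in events:
        clock += delta
        if kind == 'on':
            started[note] = clock
        elif kind == 'off' and note in started:
            yield note_name(note), round(clock - started.pop(note), 3)

# A4 held for 0.25 s, then C5 for 0.5 s.
stream = [(0.0, 'on', 69), (0.25, 'off', 69), (0.0, 'on', 72), (0.5, 'off', 72)]
print(list(notes_from_midi(stream)))  # [('A4', 0.25), ('C5', 0.5)]
```

From there, tweeting each (note, duration) pair as it is yielded is just plumbing.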