In the previous post I used the word "pitch" to refer to the frequency of a musical note, to make it clear that I was not referring to the voice of the instrument, which is more commonly understood to be the province of harmonics.
Henceforth, I may revert to using "tone", or use them interchangeably.
Friday, December 29, 2006
musing about the problem space
It's not as though scales composed of harmonics are really all that unfamiliar.
Woodwind instruments naturally produce pitches that are harmonics (integer multiples) of some fundamental frequency, determined by the size and shape of the instrument's resonant cavity. Brass instruments also produce harmonics, and those with either valves or slides are capable of altering the fundamental frequency by modifying the resonant cavity, making available pitches which fall between those which would otherwise be possible. Generally speaking, the notes either family can produce are mapped onto the equal tempered scale, although it's quite a stretch in some cases.
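The harmonic series described above is easy to sketch numerically. In this sketch the 110 Hz fundamental is an arbitrary choice for illustration, not a property of any particular instrument:

```python
# Sketch: the first few harmonics of a hypothetical fundamental.
# A harmonic is simply an integer multiple of the fundamental frequency.
fundamental = 110.0  # Hz; an arbitrary example, roughly the A two octaves below A440

harmonics = [n * fundamental for n in range(1, 9)]
for n, freq in enumerate(harmonics, start=1):
    print(f"harmonic {n}: {freq:.1f} Hz")
```

Note that the harmonics of 110 Hz include 220, 440, and 880 Hz, so octaves (doublings) fall out of the series automatically, while the odd-numbered harmonics supply the pitches between them.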
Non-fretted strings can produce any pitch whatsoever within their ranges. Orchestras, dominated by woodwinds and strings, may gravitate towards scales that could be accurately described in terms of harmonics, although the scores they work from are based on the equal tempered scale. Those scores may include pitch bending cues to encourage this, even if the harmonic nature of the tonal target is obfuscated to the point of being conceptually absent.
Pianos and most modern fretted instruments, guitars included, are designed to support the equal tempered scale, but many electronic keyboards can be programmed for a wide variety of scales, mapping their keys to pitches that are a little higher or lower than they represent in the equal tempered scale.
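The "a little higher or lower" retuning such keyboards apply can be quantified in cents (one equal-tempered semitone is 100 cents). As a sketch, here is how far a few common just-intonation ratios fall from their nearest equal-tempered keys; the ratio list is illustrative, not a canonical scale:

```python
import math

# Sketch: deviation of some just-intonation intervals from equal temperament,
# expressed as the retuning offset (in cents) a programmable keyboard would
# apply to the nearest equal-tempered key.
just_ratios = {
    "major third (5/4)": 5 / 4,
    "perfect fourth (4/3)": 4 / 3,
    "perfect fifth (3/2)": 3 / 2,
}

for name, ratio in just_ratios.items():
    cents = 1200 * math.log2(ratio)        # interval size in cents
    nearest_key = round(cents / 100)       # nearest equal-tempered semitone
    deviation = cents - 100 * nearest_key  # retuning offset for that key
    print(f"{name}: {deviation:+.1f} cents from semitone {nearest_key}")
```

The fourth and fifth land within about 2 cents of their tempered neighbors, while the just major third sits roughly 14 cents flat of the tempered one, which is a large part of why tempered thirds sound rough by comparison.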
The twin problems have always been building instruments offering a wide enough selection of discrete pitches to support a wide variety of scales without tempering, and creating a notation that represents harmonic relationships in music compactly and in a manner that can be read quickly enough for performance.
The historical upshot of the first of these was that a musician with a repertoire spanning many scales would have to carry many instruments. The latter meant that no such notational system ever became commonplace, so it wasn't really possible to write down the music; it could only be preserved by passing along the knowledge of playing it.
Programmable devices (keyboards...) are solving the former, and opening the way to address the latter. We can easily record sound these days, of course, but that's not the same thing. Suppose you had available a performance instrument (or an interface to a synthesizer) capable of a wide range of pitches in any given configuration, and of being reconfigured on the fly: how could a composition best be represented to enable you to play while reading along?
While some variation on standard musical notation, read left to right with higher notes appearing higher on the staff, might be made to work, I think the same technology which is delinking sound production from physical constraints may also provide some interesting possibilities for the presentation of music as a performable abstraction. I have some ideas along these lines, but nothing yet so well developed that it doesn't seem better to leave it to your imagination.
Given a system which represents pitches as integer multiples of the frequency of some fundamental, or, in all but the simplest cases, as integer-ratio (e.g. 4/3 or 5/4) multiples of the frequency of some reference pitch, what sort of notation would encapsulate such relationships among the simultaneous and sequential notes that comprise a musical piece, such that someone familiar with the notation could follow along quickly enough to reproduce the piece while reading?
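Whatever form such a notation takes on the page, the underlying arithmetic is exact. A minimal sketch, with an assumed 440 Hz reference pitch and an illustrative ratio sequence:

```python
from fractions import Fraction

# Sketch: notes expressed as integer-ratio multiples of a reference pitch.
# The 440 Hz reference and the ratio sequence are illustrative assumptions.
reference = 440.0  # Hz

phrase = [Fraction(1, 1), Fraction(5, 4), Fraction(3, 2), Fraction(2, 1)]
for ratio in phrase:
    print(f"{ratio}: {float(ratio) * reference:.1f} Hz")

# One appeal of such a representation: the interval between any two notes
# is itself an exact ratio, found by simple division.
interval = Fraction(3, 2) / Fraction(5, 4)
print(interval)  # 6/5, a just minor third
```

That closure under multiplication and division is what a harmonic notation would need to surface: the relationships among notes, not just their absolute positions on a staff.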
Saturday, December 02, 2006
back burner != forgotten
Rumor has it that Apple has a tablet computer with a touch-sensitive screen in the works. This is very interesting to me because it could simplify creating user interfaces in software for virtual instruments which present their tonal options in terms of harmonics, rather than force-fit to the equal-tempered scale. It might even be possible to create a usable performance instrument within the constraints of such a device, without having to migrate it to a larger display or custom-built gadget. I certainly intend to try.
Meanwhile, you can learn more about harmonic-based music at the website of The Just Intonation Network.
Thursday, September 28, 2006
on the back burner
While there hasn't recently been anything (like a new version of the app) to show for it, not a day goes by that I don't give this project at least some thought. I'll eventually get back to writing code for it.
A contributing factor in my not doing so just now is that there's a new version of the language (Objective-C 2.0) coming along with the next major version of Xcode, which should ship at about the same time as the next major version of Mac OS X (Leopard, a.k.a. 10.5), due sometime during the first half of 2007.
While details of Objective-C 2.0 aren't yet public, I've seen enough to make me think I'm going to want to use it, possibly even at the expense of making the app not backward compatible with pre-Leopard versions of OS X.
Sunday, August 20, 2006
history of the project
In late spring or early summer, 1995, I returned to The WELL after a break of approximately six months, to discover that David Doty, the editor of 1/1, The Journal of the Just Intonation Network, had become a regular visitor to a conference I'd had a part in launching. He wasn't there to proselytize Just Intonation, but he did provide a short synopsis and respond to questions.
For me, it was the long-awaited explanation of why every piano, fretted instrument, and valved brass instrument I'd ever heard sounded out of tune, and why non-fretted strings, woodwinds, and human voices not chained to the standard scale sounded so much sweeter -- and also why the concept of scales as expressed in standard musical notation seems so tortured to me.
Several years of gestation followed before I began to have specific ideas about using computers as aids: to help identify integer ratio scales, to assist in composition, and even to serve as instruments or backend processors for dedicated interface hardware. However, my initial effort ( 1 ) fell short, both for being poorly conceived and for being incapable of producing sound; it only calculated numbers.
After becoming convinced that there was no way, short of exorbitant effort, to synthesize sound from a web page, I switched to programming for Mac OS X, and eventually produced a working program ( 2 ) that both better represents the theory of ratio based music and produces sound.
Having gotten that far with it, I took a deep breath and set the project aside until such time as I had fresh enthusiasm for it, but continued to study Mac OS X and learned something about the best practices of Mac OS X programming, largely ignored in the program linked above.
I also continued to think through the basics, how computing could play a pivotal role in ratio based music, and to express those thoughts in code, starting over many times but never arriving at a sufficiently clear vision to warrant the effort of following through to a completed application.
That's the current state of the project. I know a lot more about programming, and have a jumble of ideas which may or may not add up to anything.
So the first order of business here will be to express some of those ideas as clearly as I can manage, and maybe in the process I'll see how they might fit together.
Don't be surprised if it takes some time to amount to anything...
Just Intonation
In a nutshell, Just Intonation is a theory of music, championed in the modern era by Harry Partch and a handful of others, which makes use of scales composed of tones, the frequencies of which are related by small integer ratios. This might sound complicated, but compared with the standard scale, which is composed of tones related by powers of the twelfth root of two (an irrational number), it's actually pretty simple.
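The contrast is easy to see numerically. An equal-tempered fifth spans seven semitones, each a factor of the twelfth root of two, so it can only approximate the exact 3:2 ratio of the just fifth:

```python
# Sketch: the equal-tempered fifth versus the just fifth.
# Every equal-tempered interval is a power of 2**(1/12), an irrational
# number, so no interval except the octave is an exact integer ratio.
semitone = 2 ** (1 / 12)

equal_fifth = semitone ** 7  # seven semitones approximate a fifth
just_fifth = 3 / 2           # the just fifth is an exact 3:2 ratio
print(equal_fifth, just_fifth)  # close, but never equal
```

The two values differ by about two cents: close enough that fifths were the safest interval to temper, but the discrepancy compounds across the other intervals of the scale.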
To learn more about Just Intonation, please visit the website of The Just Intonation Network.