this week’s excuse for not getting into the studio is brought to you by: a gastrointestinal virus! seriously, there is nothing more disturbing than having to hold a one year old while he retches until empty. that was sad in a deep way. my catching it was not only inevitable, but painful. so five days came and went and i’m not really sure what happened.
in lieu of useful musical goodness, i will instead post a little bit of a thought piece i’m working on. my general thesis is: do the tools we use to make music make too many decisions for us? in other words, does someone who uses a particular piece of software for recording, composing or editing gravitate to certain styles or characteristics because the tool makes it more difficult, or much simpler, to do what the composer would otherwise do? this might only be of interest to me, and it might be that way simply because i use a million different pieces of software to do what i do. but is that because i am not particularly drawn to any one environment, or is it because i use what i need to use to do what i need to do? i see in many forums people who live and die by one package. that just can’t be healthy for the user or for the art. all of that said, here is my introductory sketch for what will likely be a silly essay that a dozen people might read.
Introduction
When electronic music came into being, there were very few places in the world with the equipment necessary to produce it. Access to those studios was limited to people with backgrounds in very specific disciplines. Every piece was an experiment of some sort. While thought was given to the composition of the work, perhaps more of the effort and inspiration went into devising the methods of producing it. After all, these composers were building their entire tonal palette from scratch with each work. With the technology of electronic music moving as quickly as it was, there was always something new to learn, and the studio was more of a laboratory. The level of detailed knowledge required was incredible.
Computers quickly changed the game. There was still a lot to know and a different set of skills to acquire, but as microcomputers came onto the scene the barrier to entry dropped drastically. More and more people could create music with the steadily growing catalog of software and hardware for making sound, and before long digital audio became a stock component of even the lowest-cost personal computers.
With the hardware readily available, the software followed. As the computer allowed the composer to step away from physical patch cables and oscillators, it also took on more and more responsibility for the details of sound creation. This came at a price. After all, if the software one is using takes care of low-level items such as setting the sample rate, where should its responsibilities and functionality end? To see the opposite ends of the continuum, one need only compare Csound, with its orchestra and score text files, to GarageBand’s “Magic GarageBand” feature, which actually creates a song for the user.
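To make the low-level end of that continuum concrete, here is a minimal sketch in Python (purely illustrative; the names and values are my own, not anything Csound or GarageBand prescribes) of what the composer takes on when the tool decides nothing: the sample rate, the bit depth, the pitch, and every individual sample are explicit choices.

```python
import math
import struct
import wave

# At the low-level end, nothing is decided for you: sample rate,
# bit depth, frequency, duration, and amplitude are all the
# composer's responsibility. (Illustrative values, not defaults
# taken from any particular tool.)
SAMPLE_RATE = 44100   # samples per second, chosen by the composer
FREQ_HZ = 440.0       # A4
DURATION_S = 2.0
AMPLITUDE = 0.8       # fraction of full scale

# Compute every sample of the sine wave by hand and pack each one
# as a signed 16-bit little-endian integer.
n_samples = int(SAMPLE_RATE * DURATION_S)
frames = b"".join(
    struct.pack(
        "<h",
        int(AMPLITUDE * 32767 * math.sin(2 * math.pi * FREQ_HZ * i / SAMPLE_RATE)),
    )
    for i in range(n_samples)
)

# Even the container format is spelled out explicitly.
with wave.open("sine_a440.wav", "wb") as wav:
    wav.setnchannels(1)            # mono
    wav.setsampwidth(2)            # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(frames)
```

At the opposite end of the continuum, Magic GarageBand collapses every one of those decisions into a menu choice: each number in the sketch above becomes someone else’s default.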
This is where the questions begin and the discussion takes flight. The composer has taken necessary steps away from the technical side of producing electronic/computer music. No longer is it necessary to understand the mathematics behind sample rates or the various forms of synthesis; a computer music composer doesn’t even need to know how to program a computer. But how far away from the inner workings of the machine is too far, and to what extent do our tools make that decision for us?