XArt Online Journal

Exploding Art Music Productions 

3.2: Histories and directions of music technology

Andrew R. Brown

When considering contemporary music technologies we commonly imagine electronic technologies, particularly the synthesizer, computer, and sound recording devices. These devices are products of the post-industrial age in which we now live, an age that focuses on data processing and in which the sonic and structural boundaries of music have been exploded. In such an environment it is all too easy to be overwhelmed by the apparent choice of sounds and ways of dealing with them. Although we need no longer take refuge within the familiar boundaries of orchestral instrument timbres, common music notation, or classical forms, by simply taking a moment to reflect on the history that produced them and now leads to today's instruments and forms, we can observe that in each age musical sense has been made of the technologies. A historical context also humbles us with the recognition that our time is not a peak of such development but a unique point along the meandering path of human desire to create meaning and expression from our world and abilities. This article both signposts the history of music technologies and provides some clues about making sense of the future.

I will begin by providing potted histories from three viewpoints: firstly, some of the major technological landmarks in western music history; secondly, an overview of automated music leading to modern recording and algorithmic composition; and thirdly, a brief look at the development of electronic music technology. Having set these scenes, I will discuss what attributes might be seen as persistent and thus may deserve our attention in making a difference in the future.

Pre-electronic technologies

Human history displays a general continuity of technological influence, with large steps forward occurring at times such as the renaissance, the industrial revolution, and the information age. This pattern is supported by a relatively constant history of musical instrument development. The earliest harps, horns, and drums are clearly technologies, and their usage could be characterised very similarly to the way computers are used for music production in our age. Indeed, ancient Greeks such as Pythagoras could be said to have been more fascinated with the technological nature of music than we are even today. Other landmarks include the use of written notation from around 500 CE, and organ building improvements and equal temperament in the middle ages. The renaissance brought an obsession with the music of the spheres, arising from the newly developed field of astronomy, and a peaking of craftsmanship in the violins of Stradivarius and the fugues of Bach. The study of alchemy led to nineteenth-century chemistry and physics, providing metals for instrument fabrication and efficient methods of manufacture. The surge in instrument development went hand in hand with increases in orchestra size, and technology was even the underlying theme of music such as Wagner's "Der Ring des Nibelungen." Early twentieth-century landmarks include the automation of music via the player piano, the technological abstractions of electronic and recorded sound, and a parallel abstraction in the musical structures and notations of Debussy, Stravinsky, Schoenberg, and Cage.

This history is continuous in its curiosity and creativity, but it is not deterministic, nor simply one of increasing complexity. For example, the interests of Pythagoras could be said to align closely with the renaissance search for musical patterns in astronomical movements, and with Xenakis' stochastic investigations in the second half of the 20th century; in between were centuries of investigation down other technological paths. The path of technological development is in no way straight, or predictable in advance, but it is noticeable as it occurs.

Automated music

The oldest working automated instrument, according to Thomas Levenson, was a barrel organ of Henry VIII's built in 1502. It was manually driven, but over the course of the following century it led to fully autonomous instruments driven by clockwork mechanisms.

In order to increase the repertoire, barrel organs had to have replaceable barrels, which were expensive to produce, and playing time was limited. A solution to both these problems presented itself in the eighteenth century, using technologies employed by Jacquard weaving looms. Scores were made in the form of holes punched in paper tape or cards. These punched cards became a new form of musical notation, a notation not efficient for human reading but quite efficient for machine reading. The machine was the performer. Such instruments constitute more than an amusement, even if their quality of performance was quite low. They enabled musical performances to be captured and transported, to be reproduced on demand, and replayed time and again for closer inspection. Indeed the current popularity of soloists and duos in bars singing to CD and MIDI-file backings is not so far removed from the barrel organ busker still found on the streets of Paris.

Perhaps the most sophisticated, certainly the most popular, automated instrument was the player piano. Although its development historically paralleled the gramophone, its sonic quality was far superior for quite a while, and it brought music on demand into many homes in the first half of the twentieth century. The availability of automated musical performances in the home changed the role of the audience, affecting, not always detrimentally, concert attendance and the social status of musical performance skills. The player piano, more than electronic recording technologies, was the parent of MIDI in choosing to capture pitch, duration, and force (velocity) for each note. The piano rolls were editable, so 'near perfect' performances could be created, and composers were not slow to realise that rolls could be produced for humanly unperformable music. In this way the composer first became producer, involved in all the steps from conception to final sounding.

The stochastic music of Xenakis (determined by probabilities, and sometimes referred to as chance music), though not properly described as automated music, is deterministic, and represents one of the most extensive attempts to produce a music from a standpoint which is not only mechanistic but carries a technological aesthetic. Xenakis wrote his early music in the 1950s for acoustic instruments, calculating the probabilities by hand and using mathematics to determine elements of pitch, duration, dynamics, and structure. Beginning in the 1960s he turned to the computer to assist in both element determination and sound synthesis. Xenakis applied his stochastic perspective to music from the micro-structure of sound to the macro-structure of form.
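
As a loose illustration of composing by probabilities (and not a reconstruction of Xenakis' actual procedures), the following Python sketch selects pitches and durations by weighted chance; the pitch set, weights, and durations are invented for the example.

    import random

    # Weighted random selection of pitches and durations, in the spirit of
    # probability-driven composition. All values here are illustrative.
    pitches = [60, 62, 63, 65, 67, 68, 70, 72]   # MIDI note numbers
    pitch_weights = [4, 1, 2, 3, 4, 1, 2, 3]     # relative likelihoods
    durations = [0.25, 0.5, 1.0, 2.0]            # in beats
    duration_weights = [5, 3, 2, 1]

    def stochastic_phrase(length=16):
        """Return a list of (pitch, duration) pairs chosen by weighted chance."""
        return [(random.choices(pitches, pitch_weights)[0],
                 random.choices(durations, duration_weights)[0])
                for _ in range(length)]

    print(stochastic_phrase())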

Computer-based music composition had its start in the mid 1950s when Lejaren Hiller and Leonard Isaacson conducted their first experiments with computer-generated music on the ILLIAC computer at the University of Illinois. They employed both a rule-based system utilising strict counterpoint, and a probabilistic method based on Markov chains (also employed by Xenakis). These procedures were applied variously to pitch and rhythm, resulting in 'The ILLIAC Suite', a series of four pieces for string quartet published in 1957.
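
A minimal sketch of the probabilistic side of such work, assuming an invented first-order transition table rather than Hiller and Isaacson's actual data, might look like this:

    import random

    # An invented first-order transition table: for each pitch, the
    # probabilities of moving to each possible next pitch.
    transitions = {
        'C': {'D': 0.5, 'E': 0.3, 'G': 0.2},
        'D': {'C': 0.4, 'E': 0.6},
        'E': {'D': 0.3, 'F': 0.4, 'G': 0.3},
        'F': {'E': 0.7, 'G': 0.3},
        'G': {'C': 0.5, 'E': 0.5},
    }

    def markov_melody(start='C', length=12):
        """Walk the table, choosing each next pitch according to its probability."""
        melody = [start]
        for _ in range(length - 1):
            options = transitions[melody[-1]]
            melody.append(random.choices(list(options), list(options.values()))[0])
        return melody

    print(markov_melody())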

The recent history of automated music and computers is densely populated with examples based on various theoretical rules from music theory and mathematics. While 'The ILLIAC Suite' used known examples of these, developments in such theories have added to the repertoire of intellectual technologies applicable to the computer. Amongst these are serial music techniques, the application of music grammars (notably 'A Generative Theory of Tonal Music' by Fred Lerdahl and Ray Jackendoff), the sonification of fractals and chaos equations, and connectionist pattern recognition techniques based on work in neuropsychology and artificial intelligence.
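
To illustrate what the sonification of a chaos equation can mean in practice, here is a small, hypothetical sketch that maps successive values of the logistic map onto MIDI pitches; the parameter and the pitch range are arbitrary choices.

    # Map successive values of the logistic map x -> r*x*(1-x) onto MIDI pitches.
    # The parameter r and the pitch range are illustrative only.
    def logistic_melody(r=3.9, x=0.5, length=16, low=48, high=84):
        melody = []
        for _ in range(length):
            x = r * x * (1 - x)
            melody.append(low + round(x * (high - low)))
        return melody

    print(logistic_melody())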

Arguably the most comprehensive of the automated computer music programs is David Cope's Experiments in Musical Intelligence (EMI), which performs a feature analysis on a database of coded musical examples presented to it, and can then create a new piece which is a pastiche of those features. EMI works at many levels of detail and convincingly captures both the note-by-note melodic character and the textural and structural characteristics of the works in its database.

Electronic music technologies

After Thaddeus Cahill's relatively unsuccessful attempt at creating a massive organ-like device, the Telharmonium, using early American telephone technologies, one of the first electronic performance instruments was Leon Theremin's device invented in the 1920s in Moscow. The Theremin, as it was known, was played by situating one's hands at varying distances from two antennae. The location of the hands changed the electromagnetic fields generated by electricity passing through the antennae, one controlling the volume and the other the pitch of a constant and relatively pure tone. The Theremin made quite an impact, with pieces being written for it by Aaron Copland and Percy Grainger, although the most popularly known example is in the opening of the Beach Boys' hit 'Good Vibrations'.

The first popular electric keyboard instrument was the Hammond organ, invented in 1935 by Laurens Hammond, using electromagnetic components to generate sinusoidal waveforms which could be combined in varying proportions using drawbars. The drawbars acted like pipe organ stops, but rather than simply turning oscillators on or off, they controlled degrees of loudness. The B3 model, first produced in 1936, has become legendary in gospel, jazz, and rock musics. It provided a relatively affordable and portable keyboard instrument for music performance, and the timbral variety 'synthesised' through drawbar settings gave keyboard players a taste of customisable timbre which would later be expanded by the synthesizer.
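
The additive principle behind the drawbars can be sketched in a few lines of code: a set of sine-wave partials, each scaled by a 'drawbar' level. The frequency ratios and levels below are illustrative, not a model of the actual Hammond tonewheel mechanism.

    import math

    SAMPLE_RATE = 44100

    # Nine sine partials at roughly the Hammond drawbar frequency ratios,
    # each scaled by a 'drawbar' level from 0 to 8. Values are illustrative.
    RATIOS = (0.5, 1.5, 1, 2, 3, 4, 5, 6, 8)

    def drawbar_tone(fundamental=220.0, levels=(8, 0, 6, 0, 4, 0, 0, 0, 2),
                     seconds=1.0):
        """Return samples of the summed, level-scaled sine partials."""
        samples = []
        for n in range(int(SAMPLE_RATE * seconds)):
            t = n / SAMPLE_RATE
            samples.append(sum((level / 8.0) * math.sin(2 * math.pi * fundamental * ratio * t)
                               for level, ratio in zip(levels, RATIOS)))
        return samples

    tone = drawbar_tone()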

The solid body electric guitar was developed after some initial production of semi-acoustic electric models in the 1930s. Following early experiments by Les Paul, the first production models appeared in the early 1950s from the Gibson and Fender companies. The major technical hurdle was the refinement of the pickups to eliminate noise and provide a clear signal, solved significantly by the development of the twin-coil 'humbucking' pickup.

Thomas Edison's early development of recording technologies was done with mechanical means around the turn of the twentieth century. It was not until electronic amplifiers became available in the form of vacuum tubes that the minute etchings of the recording process could be played back with any fidelity. Even then, the making of recorded cylinders was tedious and specialised. Building on this research, the first commercial magnetic tape recorder was introduced in 1948. The ability to record, not only play back, was the shift necessary to motivate musicians to use this technology creatively.

In Paris, after the second world war, Pierre Schaeffer developed a compositional use for the previously reproduction-focused tape recorder. Musique concrète, as it became known, used recorded sounds of both instrumental and environmental origin, manipulated them by varying pitch, duration, and amplitude, and then assembled them into a polyphonic musical form.

Similar tape-based work was produced by Karlheinz Stockhausen in Cologne from the mid 1950s, which he called Elektronische Musik (electronic music). As well as treating recorded sounds, Stockhausen's focus was on synthesising new timbres using oscillators, filters, and amplifiers.

The successful commercialisation of synthesizers began with the release of the Moog synthesizers in 1964. The technical breakthroughs which made this possible were the use of transistors instead of vacuum tubes, which reduced size and cost while increasing reliability, and the technique of voltage control. One of the more popular early recordings using the Moog synthesizers was Wendy Carlos's 'Switched-On Bach', which was a notable achievement at the time, but created a legacy of imitative thinking which still haunts synthesizer usage, as propagated in the General MIDI specification.

The use of recording as a compositional and synthesis tool did not change much from the days of Musique concrète until the late 1970s, with the development of the Fairlight CMI in Australia by Peter Vogel and Kim Ryrie, and the New England Digital Synclavier, developed in New Hampshire by Sydney Alonso, Jon Appleton, and Cameron Jones. Both these instruments introduced sampling technologies (short-duration digital recording) to commercial music making in 1979. Both instruments were also capable of sound synthesis processes and used keyboard controllers for performance, attached to computer systems for storage, display, and editing of waveforms.

Digital technologies made their way into synthesizers first as memory banks for presets, most famously in the Sequential Circuits 'Prophet V', and later in the sound synthesis engine itself, notably with the Yamaha 'DX7'.

The release of the DX7 in 1983 coincided with another significant event in electronic music history, the introduction of the MIDI standard. Developed first by Dave Smith of Sequential Circuits, then with input from other major manufacturers of the time, notably Roland and Yamaha, the MIDI standard replaced the plethora of interconnection standards so that equipment from different manufacturers could communicate. MIDI began as a note-based live performance protocol, intellectually indebted to music notation and player piano technologies, and while still based around note-based live performance, MIDI has expanded to include digital representations of other musical and operational parameters.
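
The note-based nature of the protocol is visible in its messages: a Note On is just three bytes, a status byte carrying the message type and channel, followed by the note number and velocity. A minimal sketch, with illustrative values:

    # Build raw MIDI channel-voice messages as byte strings.
    def note_on(channel=0, note=60, velocity=100):
        """Note On: status byte 0x90 plus channel, then note number and velocity."""
        return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

    def note_off(channel=0, note=60):
        """Note Off: status byte 0x80 plus channel, then note number and zero velocity."""
        return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

    print(note_on().hex())   # '903c64': Note On, channel 1, middle C, velocity 100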

The synthesizer, in its keyboard form, has remained quite stable since the 1980s, with some extensions to other instrument modes such as guitar, woodwind, and percussion controllers. There is also continued research into new instrument design, as there has always been, with STEIM and the HyperInstrument group at MIT's Media Lab being amongst those at the cutting edge.

The 1980s also saw an increase in personal computer ownership, and with it the expansion of music software. Most significant from a commercial perspective was the rise and rise of the MIDI sequencer. Building on the techniques of earlier electronic sequencers for repeating a short series of notes, software sequencers became more and more comprehensive.

Alongside sequencing, music notation programs were also appearing at this time, although it took the desktop publishing revolution of the early 90s for all the appropriate technologies to fall into place, notably PostScript printing. Computer music publishing is now the norm rather than the exception. The first program to successfully combine both sequencing and notation was C-Lab's Notator on the Atari computer, which proved the rule that you only need one 'must have' program to sell a computing platform.

As personal computer power increases in the late 1990s, synthesis software (long the domain of expensive systems such as the Fairlight or computer workstations) is becoming accessible. This is evident in the current popularity of hard disk recording systems, such as Avid's Pro Tools, as well as real-time signal processing systems which are becoming practical on home computers for reverb and equalisation, and even real-time synthesis techniques as complex as frequency modulation, granular synthesis, and physical modelling.
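
As an indication of how simple the core of such a technique can be, here is a brief sketch of two-operator frequency modulation; the carrier and modulator frequencies and the modulation index are illustrative choices.

    import math

    SAMPLE_RATE = 44100

    def fm_tone(carrier=440.0, modulator=220.0, index=2.0, seconds=1.0):
        """Samples of a sine carrier whose phase is modulated by a sine modulator."""
        samples = []
        for n in range(int(SAMPLE_RATE * seconds)):
            t = n / SAMPLE_RATE
            samples.append(math.sin(2 * math.pi * carrier * t
                                    + index * math.sin(2 * math.pi * modulator * t)))
        return samples

    tone = fm_tone()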

The integration of composing, recording, publishing, and multimedia facilities on computer is just the latest step in providing expressive tools for the musician. And as with all musical instruments, there is a learning curve to realise the expressive potential.

Layers of persistence

With the benefit of hindsight it becomes clear that there are features of music technologies which reappear, and others which are transient. Such features of technology I describe as being differently persistent. The most persistent of all features seems to me to be the tendency always to attempt to turn scientific (technological) discoveries to creative sonic use. This applies equally to the use of new metallic substances for horns, through to weaving-loom automation for driving automated organs. What follows after this experimental stage is a persistent need to make musical sense of this newly acquired sound making. Extended virtuosity on improved brass instruments, for example, followed each technical improvement in construction. Conversely, the difficulty of finding a more than economically worthwhile application for automated organs, pianos, or karaoke machines should by now suggest an obvious trend about the place of passive musical technologies.

A less persistent, and thus less influential, aspect of music's history is the fact that while technological changes resulted in new musical styles, technology itself is not something which affects some styles and not others. This is different from saying it is astylistic, which it is not; each technology is biased toward particular aesthetic outcomes, and that feature is persistent. The fact that violin development affected baroque music, while saxophone development affected jazz, and the tape recorder created musique concrète, says nothing about those styles being more or less prone to technological influence, but is simply a matter of historical context. Therefore technological development is minimally determined by style, and not persistent in relation to a style. Generally a stylistic aesthetic is established at a time of some technological development, and the aesthetic is persistent, while the technology's effect is not.

Another area where a difference in persistence can be seen is the difference between a change in medium and a change in the resolution of a medium. Mediums are persistent, while resolution is not. Thus changes in medium can be significant, while changes in resolution may not be. This is somewhat counter-intuitive because most technological progress is a change in resolution: the audio quality of tape recording is improved, the resolution of the television screen becomes finer, the speed of the computer is increased, and so on. Such changes are changes in resolution, improvements in efficiency. Beyond a threshold where resolution is adequate, further increases are not a good measure of improved capacity.

For example, the MIDI handling capabilities of computers today are minimally improved from five years ago, despite considerable advances in computing capacity. On the other hand, the shift from sending printed scores in the post to emailing score files is a change in medium which is likely to alter the distribution of musical scores far more significantly than any incremental improvement in printing quality. The shift from paper to digital medium is significant, and its effects will be persistent.

A more historical change in medium was the shift from slide trumpets to valve trumpets. A valve was not a fancier slide; it was a revolutionary change, not a 'resolutionary' one. Another, more contemporary, medium change is the shift from tape recording to hard disk recording, the significance of which lies in two changes of medium. The first is from analogue to digital, which provides increased dynamic range and improved signal-to-noise ratio, but may reduce frequency range. The second is the shift from linear to random access of the recorded information. On tape, the sound is accessed serially as we hear it; we can cut and splice the tape, but it is a tedious process. With hard disk recording any moment of sound can follow any other, and the full ramifications of this change are yet to be felt.

Paying attention to the persistence, or long-term influence, of aspects of technology is important for noticing significant changes and ignoring insignificant ones. In this way we need not become overwhelmed by contemporary technological change, but can see in it the aspects with persistence worth our attention, and focus on making sense and meaning of those changes.

And so history continues, and we will continue to seek new ways of seeing, or should I say hearing, the world through our technologies. As musicians we will strive to make sonic sense of those new ways of hearing, always mindful of our heritage and what it tells us, yet looking forward to creating a new musical heritage.

Bibliography:

Dobson, R. 1992. A Dictionary of Electronic and Computer Music Technology. Oxford: Oxford University Press.

Levenson, T. 1994. Measure for Measure: A Musical History of Science. New York: Touchstone.

Schrader, B. 1982. Introduction to Electro-acoustic Music. Englewood Cliffs: Prentice-Hall.

Schwanauer, S. M. & Levitt, D. A. (Eds.) 1993. Machine Models of Music. Massachusetts: The MIT Press.



©1997-2000 Exploding Art Music Productions