Tools for Sound Processing
[L,R,format] = wavr16('ma1.wav');  % read the wave file into left (L) and right (R) channels
S = L/2;                           % initialize S to the right length; the last sample stays at L(end)/2
for i = 1:length(L)-1
  S(i) = (L(i) + L(i+1))/2;        % average each pair of adjacent samples
end
This code turns out to be less compact but probably easier to understand. However, its running time is significantly higher because of the for loop.
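For comparison, the same operation can be written in vectorized form; the following lines are a sketch (not necessarily identical to the compact listing referred to above) that produces the same S as the loop:
S = L/2;                                    % last sample is left at L(end)/2, as in the loop
S(1:end-1) = (L(1:end-1) + L(2:end)) / 2;   % average adjacent samples with vector operations
Because the loop is replaced by whole-vector operations, Matlab and Octave execute this form much faster.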
The Matlab environment provides a collection of functions called the Signal Processing Toolbox. In the examples of this book we do not use those functions, preferring public-domain routines written for Octave, possibly modified to be usable within Matlab. One such function is stft.m, which computes a time-frequency representation of a signal. This can be useful for time-frequency processing and display, as in the script
SS = stft(S);          % short-time Fourier transform of the signal S
mesh(20*log10(SS));    % display the result on a dB scale as a 3D surface
whose result is a 3D representation of the time-frequency behavior of the
sound contained in S.
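The plot produced by mesh uses bin and frame indices on its axes. If the analysis parameters are known, the axes can be rescaled to physical units; the following sketch assumes (the values are illustrative, not taken from the book) a sampling rate Fs for the input file, an analysis hop of inc samples, and that SS holds one row per frequency bin and one column per frame:
Fs  = 44100;                          % sampling rate of ma1.wav (assumed)
inc = 256;                            % analysis hop size in samples (assumed; must match stft)
[nbins, nframes] = size(SS);
f = (0:nbins-1) * (Fs/2) / nbins;     % frequency axis in Hz, assuming bins span 0 .. Fs/2
t = (0:nframes-1) * inc / Fs;         % time axis in seconds
mesh(t, f, 20*log10(SS));
xlabel('time (s)'); ylabel('frequency (Hz)'); zlabel('magnitude (dB)');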
B.2 Languages for Sound Processing
In this section we briefly show how sounds are acquired and processed using languages that have been explicitly designed for sound and music processing. The most widely used language is probably Csound, developed by Barry Vercoe at the Massachusetts Institute of Technology and available since the mid-eighties. Csound is a direct descendant of the family of Music-N languages created by Max Mathews at Bell Laboratories starting in the late fifties. In this family, the language of choice for most computer-music composers between the sixties and the eighties was Music V, which established a standard symbology for the basic operators, called Unit Generators (UGs).
According to the Music-N tradition, the UGs are connected as if they were
modules of an analog synthesizer, and the resulting patch is called an instrument.
The actual connecting wires are variables whose names are passed as arguments
to the UGs. An orchestra is a collection of instruments. For every instrument,
there are control parameters which can be used to determine the behavior of
the instrument. These parameters are accessible to the interpreter of a score,
which is a collection of time-stamped invocations of instrument events (called
notes).
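As a rough illustration of this decomposition in Matlab/Octave terms (this is only a sketch of the idea, not of how Csound is implemented; all names and values below are arbitrary), an "instrument" can be a sinusoidal oscillator and a "score" a matrix with one time-stamped note per row:
Fs = 8000;                                    % sampling rate (arbitrary)
score = [0.0 0.5 0.3 440;                     % one note per row: onset, duration, amplitude, frequency
         0.6 0.4 0.2 660];
out = zeros(1, ceil(1.5*Fs));                 % output buffer covering the whole score
for n = 1:size(score,1)
  t = 0:1/Fs:score(n,2);                      % time axis of the current note
  note = score(n,3) * sin(2*pi*score(n,4)*t); % the "instrument": a sine oscillator
  i1 = round(score(n,1)*Fs) + 1;              % sample index of the note onset
  out(i1:i1+length(note)-1) = out(i1:i1+length(note)-1) + note;
end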
Fig. 2 shows a schematic description of how Music-V-like languages work: a) is a Music-V source text (taken from [56, page 45]), while b) is its graphical representation. The orchestra/score metaphor, the decomposition of an orchestra into non-interacting instruments, and the description of a score as a sequence of notes are all design decisions taken in deference to a traditional view of music. However, many musical and synthesis processes do not fit well into such a metaphorical frame. As an example, consider how difficult it is to express modulation-processing effects that involve several notes played by a single synthesis instrument (such as those played within a single violin bowing): it would be desirable to have