Csound provides a large number of opcodes designed to assist in the distribution of sound amongst two or more speakers. These range from opcodes that simply balance a sound between two channels, to ones that include algorithms to simulate the doppler shift that occurs when sound moves, algorithms that simulate the filtering and inter-aural delay that occurs as sound reaches both of our ears, and algorithms that simulate distance in an acoustic space.
First we will look at some 'first principles' methods of panning a sound between two speakers.
The simplest method typically encountered is to multiply one channel of audio (aSig) by a panning variable (kPan) and to multiply the other channel by 1 minus the same variable, like this:
aSigL = aSig * kPan
aSigR = aSig * (1 - kPan)
outs aSigL, aSigR
where kPan is within the range 0 to 1. If kPan is 1, all of the signal will be in the left channel; if it is 0, all of the signal will be in the right channel; and if it is 0.5, there will be signal of equal amplitude in both the left and the right channels. In this way the signal can be continuously panned between the left and right channels.
The problem with this method is that the overall power drops as the sound is panned to the middle.
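This power drop is easy to verify numerically. The following sketch (written in Python for convenience rather than Csound; the function names are ours, not part of any library) computes the summed power of the two channel gains at the extremes and at the centre of a linear pan:

```python
import math

def linear_pan(pan):
    """Return (left, right) gains for the linear pan law, pan in 0..1."""
    return pan, 1 - pan

def power(gains):
    """Summed power of the two channel gains."""
    return sum(g * g for g in gains)

# At either extreme all the signal is in one channel, so power = 1.
print(power(linear_pan(0.0)))  # 1.0
print(power(linear_pan(1.0)))  # 1.0

# At the centre each gain is 0.5, so power = 0.25 + 0.25 = 0.5,
# i.e. roughly 3 dB down relative to the extremes.
print(round(10 * math.log10(power(linear_pan(0.5))), 2))  # -3.01
```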
One possible solution to this problem is to take the square root of the panning variable for each channel before multiplying it by the audio signal, like this:
aSigL = aSig * sqrt(kPan)
aSigR = aSig * sqrt(1 - kPan)
outs aSigL, aSigR
By doing this, the straight-line mapping of the input panning variable becomes a convex curve, so less power is lost as the sound is panned centrally.
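To see why the square root helps, here is a quick numerical check (again in Python for convenience; the function name is ours). With this law the summed power of the two gains is constant at every pan position, so the centre no longer dips:

```python
def sqrt_pan(pan):
    """(left, right) gains for the square-root pan law, pan in 0..1."""
    return pan ** 0.5, (1 - pan) ** 0.5

# The summed power left^2 + right^2 is constant across the pan range:
for pan in (0.0, 0.25, 0.5, 0.75, 1.0):
    left, right = sqrt_pan(pan)
    print(round(left * left + right * right, 6))  # 1.0 each time
```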
Using 90° sections of a sine wave for the mapping produces a more convex curve and a less immediate drop in power as the sound is panned away from the extremities. This can be implemented using the code shown below.
aSigL = aSig * sin(kPan*$M_PI_2)
aSigR = aSig * cos(kPan*$M_PI_2)
outs aSigL, aSigR
(Note that '$M_PI_2' is one of Csound's built-in macros and is equivalent to pi/2.)
A fourth method, devised by Michael Gogins, places the point of maximum power for each channel slightly before the panning variable reaches its extremity. As a result, when the sound is panned dynamically it appears to move beyond the position of the speaker it is addressing. This method is an elaboration of the previous one and makes use of a different 90° section of a sine wave. It is implemented using the following code:
aSigL = aSig * sin((kPan + 0.5) * $M_PI_2)
aSigR = aSig * cos((kPan + 0.5) * $M_PI_2)
outs aSigL, aSigR
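The behaviour of this law can be seen by tabulating the channel gains at a few pan positions (again sketched in Python for convenience; the function name is ours). Note that past the midpoint one gain becomes negative, i.e. that channel is phase-inverted, which plausibly contributes to the impression of the sound moving beyond the speaker:

```python
import math

def gogins_pan(pan):
    """(left, right) gains for the shifted sine/cosine law above."""
    arg = (pan + 0.5) * math.pi / 2
    return math.sin(arg), math.cos(arg)

# Summed power is still constant (sin^2 + cos^2 = 1), but the right
# gain passes through zero at pan = 0.5 and is negative beyond it:
for pan in (0.0, 0.25, 0.5, 0.75, 1.0):
    left, right = gogins_pan(pan)
    print(round(left, 4), round(right, 4))
```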
The following example demonstrates all four methods one after the other for comparison. Panning movement is controlled by a slow-moving LFO. The input sound is filtered pink noise.
EXAMPLE 05B01.csd
<CsoundSynthesizer>
<CsOptions>
-odac ; activates real time sound output
</CsOptions>
<CsInstruments>
; Example by Iain McCurdy

sr     = 44100
ksmps  = 10
nchnls = 2
0dbfs  = 1

instr 1
imethod = p4 ; read panning method variable from score (p4)

;generate a source sound================
a1   pinkish 0.3            ; pink noise
a1   reson   a1, 500, 30, 1 ; bandpass filtered
aPan lfo     0.5, 1, 1      ; panning controlled by an lfo
aPan =       aPan + 0.5     ; offset shifted +0.5
;=======================================

if imethod==1 then
;method 1===============================
aPanL = aPan
aPanR = 1 - aPan
;=======================================
endif

if imethod==2 then
;method 2===============================
aPanL = sqrt(aPan)
aPanR = sqrt(1 - aPan)
;=======================================
endif

if imethod==3 then
;method 3===============================
aPanL = sin(aPan*$M_PI_2)
aPanR = cos(aPan*$M_PI_2)
;=======================================
endif

if imethod==4 then
;method 4===============================
aPanL = sin((aPan + 0.5) * $M_PI_2)
aPanR = cos((aPan + 0.5) * $M_PI_2)
;=======================================
endif

outs a1*aPanL, a1*aPanR ; audio sent to outputs
endin

</CsInstruments>
<CsScore>
;4 notes one after the other to demonstrate 4 different methods of panning
;p1 p2  p3   p4(method)
i 1  0   4.5  1
i 1  5   4.5  2
i 1  10  4.5  3
i 1  15  4.5  4
e
</CsScore>
</CsoundSynthesizer>
An opcode called pan2 exists which makes it slightly easier for us to implement simple panning using various methods. The following example demonstrates the three methods that this opcode offers, one after the other. The first is the 'equal power' method, the second 'square root' and the third is simple linear. The Csound Manual alludes to a fourth method, but this does not seem to function currently.
EXAMPLE 05B02.csd
<CsoundSynthesizer>
<CsOptions>
-odac ; activates real time sound output
</CsOptions>
<CsInstruments>
; Example by Iain McCurdy

sr     = 44100
ksmps  = 10
nchnls = 2
0dbfs  = 1

instr 1
imethod = p4 ; read panning method variable from score (p4)

;generate a source sound====================
aSig pinkish 0.5              ; pink noise
aSig reson   aSig, 500, 30, 1 ; bandpass filtered
aPan lfo     0.5, 1, 1        ; panning controlled by an lfo
aPan =       aPan + 0.5       ; offset shifted +0.5
;===========================================

aSigL, aSigR pan2 aSig, aPan, imethod ; create stereo panned output
outs aSigL, aSigR ; audio sent to outputs
endin

</CsInstruments>
<CsScore>
;3 notes one after the other to demonstrate 3 methods used by pan2
;p1 p2  p3   p4
i 1  0   4.5  0 ; equal power (harmonic)
i 1  5   4.5  1 ; square root method
i 1  10  4.5  2 ; linear
e
</CsScore>
</CsoundSynthesizer>
3-D binaural simulation is available in a number of opcodes that make use of spectral data files providing information about the filtering and inter-aural delay effects of the human head. The oldest of these is hrtfer. The newer ones are hrtfmove, hrtfmove2 and hrtfstat. The main parameters of control for these opcodes are azimuth (the position of the sound source in the horizontal plane relative to the direction we are facing) and elevation (the angle by which the sound deviates from this horizontal plane, either above or below). Both of these parameters are defined in degrees. 'Binaural' implies that the stereo output of these opcodes should be listened to using headphones, so that no mixing of the two channels in the air occurs before they reach our ears.
The following example takes a monophonic source sound of noise impulses and processes it using the hrtfmove2 opcode. First the sound is rotated around us in the horizontal plane, then it is raised above our head, then dropped below us, and finally returned to straight and level in front of us. For this example to work you will need to download the files hrtf-44100-left.dat and hrtf-44100-right.dat and place them in your SADIR (see setting environment variables) or in the same directory as the .csd file.
EXAMPLE 05B03.csd
<CsoundSynthesizer>
<CsOptions>
-odac ; activates real time sound output
</CsOptions>
<CsInstruments>
; Example by Iain McCurdy

sr     = 44100
ksmps  = 10
nchnls = 2
0dbfs  = 1

giSine     ftgen 0, 0, 2^12, 10, 1
giLFOShape ftgen 0, 0, 131072, 19, 0.5, 1, 180, 1 ; U-SHAPE PARABOLA

instr 1
; create an audio signal (noise impulses)
krate oscil   30, 0.2, giLFOShape           ; rate of impulses
kEnv  loopseg krate+3, 0, 0,1, 0.1,0, 0.9,0 ; amplitude envelope: a repeating pulse
aSig  pinkish kEnv                          ; pink noise, pulse envelope applied

; apply binaural 3d processing
kAz   linseg 0, 8, 360                      ; break-point envelope defines azimuth (one complete circle)
kElev linseg 0, 8, 0, 4, 90, 8, -40, 4, 0   ; break-point envelope defines elevation (held horizontal for 8 seconds, then up, then down, then back to horizontal)
aLeft, aRight hrtfmove2 aSig, kAz, kElev, "hrtf-44100-left.dat", "hrtf-44100-right.dat" ; apply hrtfmove2 opcode to audio source - create stereo output
outs aLeft, aRight ; audio sent to outputs
endin

</CsInstruments>
<CsScore>
i 1 0 60 ; instr 1 plays a note for 60 seconds
e
</CsScore>
</CsoundSynthesizer>