
Advice for your final projects

Write Art Music, Not Pop Music

Your two pieces can be any sort of electroacoustic music. I understand there is some confusion about what this actually means. Electroacoustic music is electronically-based experimental art music. Electronic dance music or other pop musics (including bands like Autechre) are NOT electroacoustic. If you write pop music, no matter how good, you will get a poor score, so don’t do this. Write art music instead.

In order to ensure that your piece sounds like art music, avoid 4/4 rhythms. You could try irregular rhythms, stochastic placement of events, random emphases on a regular rhythm, or groups of 2, 3 or 4 beats that change frequently, so that the rhythmic centre of your piece is shifting or unstable. It is not necessary for your piece to have a rhythmic component.
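
If you are sketching ideas in SuperCollider, here is a minimal illustration of stochastic event placement. This is only a sketch using SuperCollider's built-in \default instrument; the particular numbers are arbitrary:

Pbind(
	\instrument, \default,          // SuperCollider's built-in default synth
	\dur, Pwhite(0.05, 1.2, inf),   // random inter-onset interval, so there is no regular grid
	\degree, Pwhite(-7, 14, inf),   // random pitch, just so each event is audible
	\amp, Pexprand(0.05, 0.3, inf)  // random emphasis
).play;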

Also, be aware of popular chord progressions and try to avoid them. One way to do this is to use a tuning system other than 12 tone equal temperament. If you do use 12ET, then pick different modulations than you would find in pop.

A lot of synth pads and other instruments in Logic have a very pop-sounding timbre. Be careful of this. You might instead try FM synthesis, which can produce inharmonic timbres that sound less poppy.
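
For example, here is a quick SuperCollider sketch of FM with a non-integer carrier-to-modulator ratio, which gives an inharmonic spectrum. The specific numbers are only illustrative:

(
{
	var carrier = 220, ratio = 1.618, index = 3;
	// the modulator frequency is an irrational multiple of the carrier,
	// so the resulting partials do not line up harmonically
	var mod = SinOsc.ar(carrier * ratio, 0, carrier * ratio * index);
	SinOsc.ar(carrier + mod, 0, 0.2) ! 2
}.play;
)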

Note that there is nothing intrinsically wrong with using these elements in an electroacoustic piece, and you should feel free to explore them in future work, but for now, be very careful and/or avoid things that might be too poppy.

Development and Structure

Maybe you’ve come up with a sound that you love and you could just listen to it all afternoon and be happy. Alas, composers cannot love their sounds too much: a sound needs to be going somewhere or doing something. Even in drone pieces, the sound can’t just stay static; it needs to be changing in some way or making some kind of gesture.

Your piece should have a beginning, a middle and an end. Often, it’s best to write the end first, then the start and then the middle. You may want to have multiple sections in the middle. After you have all your sections, you may need to compose bits to glue them together into a coherent piece.

Put things into groups. If a sort of sound is worth having once, it’s probably worth having twice. Or a sound like it could come back. However, beware of exact repetition. Change something on the second or third time that it comes back. In the same way, you should repeat gestures, but don’t get overly predictable. If the listener thinks she knows what’s going to happen, do something unexpected.

One way to structure a piece is to make up a story about the sound or the listener. Is the sound having a transformative experience? Is the listener going someplace?

Sound can have roles or functions. Is this sound in the foreground? Is it in the background? Is it doing something major? Is it commenting on something that just happened? Is it foreshadowing something that’s about to happen?

Your piece should change as it goes. You may want to have quiet bits and loud bits or dense bits and sparse bits. You may want to have gradual transitions or dramatic ones or some of each sort.

Much of the above is suggestion rather than rule; however, for this piece, you MUST have a beginning, a middle and an end, and your piece MUST develop and change in some way rather than remaining static. There are pieces that don’t follow these rules, and you should feel free to explore that in future work, but for now, following those two rules will improve your grade.


Task 1

Remember that the module guide directs the following:

Each task must be accompanied with a short critical report stating clearly (use these headings):

  1. what your objective was in creating the music for your task
  2. what well known composer’s approach you decided to adopt and how you modified it to suit your piece. Or, if you feel your approach is original, then explain what makes it so. Please reference the films on DVD as shown in the ARU Harvard referencing style.
  3. Note one particular audio technique you applied in this task which you had not applied before
  4. Note one particular music technique you learned through doing this task
  5. In your view, how is this task an example of practice as research?
  6. Autoevaluation: what mark would you give yourself for this task and why?

It’s fine to make other notes about the topic in your blog and to do additional posts, but for your task writeups, don’t forget to include these headings and put something underneath them. If, for some reason, one of them does not apply to a particular task, please note that.

Also, only a small number of you have emailed me links to your blogs. The VLE appears to be offline this weekend, so if you have posted your blog to the discussion forum, please also email me the link. And, if you have emailed me your link, please also post it to the discussion forum, so you can check out each other’s blogs.

Musique concrète

The wikipedia definition for electroacoustic music is “music resulting from the manipulation of recorded or generated sound.” This describes everything currently available in the iTunes store, and thus is not a useful definition. Instead, electroacoustic music is music that is historically or artistically linked with past experimental electronic music, such as the Musique concrète made in Paris or the Elektronische Musik made in Cologne.

Musique concrète was devised by Pierre Schaeffer, who presented the first ever concert of this new type of music in 1948 with five short pieces called Cinq études de bruits (Five Noise Studies). He was influenced by the futurists and used recordings of sounds and noises. We listened to one of these: Étude aux chemins de fer, which used sounds recorded from trains.

Schaeffer’s first pieces were produced using discs, as he did not yet have access to tape. He did have quite a lot of access to advanced studio gear of the time, as he was working for the French national broadcaster RTF.

In 1949, he started to collaborate with Pierre Henry, another RTF employee, and they created a piece called Symphonie pour un homme seul (Symphony for a Man Alone). They were supported by RTF in this, had better access to technology and began to process the sounds. They split their sounds into two categories: human and non-human.

In 1952, Schaeffer wrote a treatise on sound objects. A sound object is the sound itself, not the instrument that produced the sound or the tape that holds it, but the vibrations in the air. He proposed that sounds have several properties, by which they can be categorised:

  • Mass – which is the spectral dimension
  • Dynamics
  • Timbre
  • Melodic profile – spectral changes over time
  • Grain – irregularities (like film grain)
  • Pace

It would be possible to do a sort of concrete serialism with these properties, but he was not an adherent of serialism. An essential part of Musique Concrète was that the musical form was derived from the sound material, rather than sounds being put into a pre-existing structure.

He proposed something he called “acousmatics,” where the sound object is a thing itself, divorced from the context that produced it. He also put forth the idea of reduced listening, where the listener hears only the sounds and not the imagined source.

We discussed whether this is possible, and some of you said that if you listen to a sound enough times, as when you are working on it, it gradually loses its associations, but this doesn’t tend to happen without repetition. Trevor Wishart notes that psychological research shows that humans have a strong tendency to assign sources to sounds.

RTF was very keen on Musique concrète and funded the Groupe de Recherche de Musique Concrète, which eventually turned into the Groupe de Recherches Musicales (GRM), which still exists now.

In 1952, Stockhausen went to Paris to study and did some short studies in Schaeffer’s studio. Then he went back to Germany and was involved with the WDR studio in Cologne, which opened in 1953. There was a major rivalry between Paris and Cologne. The Germans were doing Elektronische Musik, which was largely additive synthesis.

However, in 1964, Stockhausen did some experiments with recording sounds and processing them. He used a tape recorder, a filter and a tamtam (a gong). He tried using different kitchen implements on the tamtam while holding a microphone very close to it. Meanwhile, an engineer was adjusting a filter and recording the sounds Stockhausen was making on the tamtam. When they played back the tape, they were amazed at the sounds they had gotten.

Stockhausen then wrote a score for different sorts of sounds he hoped could be gotten from a tamtam and wrote a trio for a percussionist, a microphonist and somebody to adjust the filtering. The piece was played live with all three persons on stage. He wrote, “The microphone is no longer a passive tool for high fidelity reproduction: it becomes a musical instrument influencing what it is recording.”

This piece was called Mikrophonie and is on a CD in the library, which you should listen to. It is also the basis for Task 1, which is discussed below.

Another piece that uses very close miking is Concret PH by Xenakis, which used microphones placed in among burning coals and embers. He created this piece for the Philips Pavilion at the 1958 World’s Fair, something discussed in much greater detail in week 2.

More Recent Musique concrète

In 1970, the French composer Luc Ferrari made a piece called Presque Rien (Almost Nothing), using recordings of a day at the beach. He overlaps the sound events to create a compressed picture of what the day sounded like. This piece is on a CD in the library and you should listen to it.

In 1989, the New Zealand composer Annea Lockwood made a piece called A Sound Map of the Hudson River, which presents recordings from the very start of the Hudson River to where it empties into the Atlantic at New York City.

Listening

Reading

  • Holmes, Thom. 2008. Electronic and Experimental Music, Third Edition. New York: Routledge. (pp. 45-78 and 91-100)
  • Cox, C. & Warner, D. 2004. Audio Culture: Readings in Modern Music. New York: Continuum. (Chapter 14, p. 76, “Acousmatics”, by P. Schaeffer)
Task 1

Use close microphone placement and tape transformations of the sort available in the 1950s (forwards, backwards, sped up, slowed down, cut up, plate-style reverb) to explore quiet, hidden sounds. For example, scratch or strike metallic objects, pluck the teeth of a comb, etc.

Advice for your reports

First of all, there is still time left to go to Julio’s office hours, which is something that I recommend. Also, next Wednesday, I can have a look at your work in progress during the normal class time.

The paper you’re writing is a report on your piece. It is not a history of (for example) dubstep, nor is it about your opinion of the history of dubstep, nor a discourse on your favourite dubstep pieces. Instead, just talk about the pieces, history and writing that directly influenced/informed the piece that you wrote. Extra navel-gazing and/or context is not required.

Don’t quote long blocks of text. If you’re influenced by a paper that has a section about “rhythmic intensity,” you can quote that phrase, if it’s relevant. Keep the quotations short and to the point.

Obviously, any time you copy a phrase directly from another source, it has to have quote marks around it and be cited. Paraphrasing also requires citation. Also keep these short and to the point.

Any factual claims that you make must have citations.

Be specific. Don’t just say what artists influenced you, but also what tracks. If you tried to emulate a great transition at 1:30 in a particular track, you can mention it specifically.

Mention the title of your piece in the title of your paper.

Your paper needs some structure to it. Each bullet point in the module guide should correspond to a section of your paper.

The graph of your piece’s structure should be relatively simple and notate the form of your piece with timings. This is something like: intro, A1, B1, break, B2, bridge, A2, outro, in graph form and with timings listed.

When you are discussing your samples, you need to list all of them, not just which libraries you pulled them from: “I used the kick, snare and hi-hat from the Apple Loop Library…”

You don’t get extra points for using extra words. You almost never need to say things like “very” or “quite.”

“I feel that” and “I think that” are rarely going to be required in a report like this one. Replace “I feel that Walk Like An Egyptian (The Bangles, 1986) is a very good example of a piece that uses whistling” with “Walk Like An Egyptian (The Bangles, 1986) uses whistling.” or “I was inspired to use whistling by Walk Like An Egyptian (The Bangles, 1986).”

Try to minimize passive voice. Replace “An echo effect is being used on track 1.” with “I used an echo effect on track 1.” Replace “This kind of effect has been used by tracks such as XYZ by ABC (2010) and UVW by DEF (2009).” with “XYZ by ABC (2010) and UVW by DEF (2009) both use this effect.” Anywhere you see phrases like “is being,” that’s passive voice.

Many of you seem to have occasional sentence fragments. Sentences need to have a verb in them and can’t just be a dependent clause.

Have somebody else read your paper. Give them a copy of the module guide when they do it.

The best thing to do is go to Julio’s office hours. Or you can email me a copy of the paper. Also, I’ll be there on Wednesday.

Data Bending with Audacity

Data Bending is a technique where one type of data is treated as if it is another type of data. In this example, we’ll be treating data as if it is audio. Some types of data may make very interesting audio.

Audacity is a FOSS programme for Mac, Windows or Linux. I recommend using the 1.3.x version, even if it’s in beta.

To open a non-audio file in Audacity, launch the programme and then, under the File menu, go to Import and then, in the submenu, select Raw Data…. A dialog box will open prompting you to pick a file. Then another dialog box will open asking you how the file should be read. The default is fine for most applications.

After you have the file open, the next important step is to normalise it. Select the entire file, then go to the Effect menu and select Normalize…. A dialog box will open. The most important part is to check the uppermost box, to remove DC offset. DC offset can do bad things to your speakers or headphones, so you definitely want it gone.

Now, listen to the whole file. Just because most of it sounds like static, doesn’t mean it all will. Some files will have surprisingly interesting bits hidden in them. You cannot always guess the location of these bits by looking at the wave form, so a listen is definitely worthwhile.
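
If you would rather stay inside SuperCollider, roughly the same trick can be sketched there as well. This is only a sketch: the path is hypothetical and should point at a non-audio file of your own.

(
var path, file, data, buf;
path = "/path/to/some/nonaudio/file";    // hypothetical path: use a file of your own
file = File(path, "rb");
data = FloatArray.fill(10 * 44100, {     // cap at roughly 10 seconds of samples at 44.1kHz
	(file.getInt8 ? 0) / 128.0           // treat each byte as a sample; 0 once we hit the end of the file
});
file.close;
buf = Buffer.loadCollection(s, data, 1, {
	// LeakDC removes the DC offset, much like the Normalize step in Audacity
	{ (LeakDC.ar(PlayBuf.ar(1, buf, doneAction: 2)) * 0.5) ! 2 }.play;
});
)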

Opening Applications to Data Bend Them on the Mac

Applications can sometimes sound extremely interesting, but their structure is not entirely straightforward. Let’s say you want to bend iTunes. Find it in the file system (not just on the dock) and right-click on it (which you can also do by holding down the control key while clicking). A menu will appear. Select Show Package Contents. A new Finder window will appear and inside will be the contents of iTunes or whichever app you’re trying to bend.

Inside the folder Contents, there will be several other folders, containing files that are useful to the application. Most of these files are quite small; you want something that’s over a megabyte at the very least. Bigger is better. Usually, the largest files are in the folder called MacOS. Often, you will find a file that has the same name as the application. That’s often the largest file. Other large files may be lurking in other directories, so have a look around.

Once you find a file that you want to try to open, right-click on it. A menu will pop up. Select Open With. In the submenu, select Other. A dialog will pop up asking you to find the application you wish to use. In the bottom section, there’s a label “Enable” with a pull-down list next to it. Change that to All Applications. (Leave the box below it alone!) Find Audacity and select it. You’ll see a warning message saying it’s unknown whether or not Audacity can open the file. Don’t worry about that, just click OK.

Sometimes, the file will open in Audacity, but most of the time, the warning was correct. However, this has all been useful. Go to the File menu in Audacity, go to Import and then, in the submenu, select Raw Data…. The file dialog will open showing the folder containing the file that you just tried to open. So if you were trying this with iTunes, you will be looking at the contents of its MacOS folder and see the file called iTunes, which you just tried to open. Select it again, and this time it will open, after showing you the import raw data dialog.

Don’t forget to remove DC offset and listen to the whole thing.

Putting Bent FX on Audio

You can open your image files as audio and listen to them. But you can also use some image editors to open audio files as if they were images. Try putting image effects on them, like drawing extra lines or adding blur and then give a listen to the results.

Using BBCut on sounds created in real-time

Routing Audio

BBCut can be used on live input, in addition to sound files. So if you want to run the dub SynthDef through BBCut, you can do that. Many of the issues that come up here with BBCut also apply to other uses of effects and routing. For example, it’s important that the server runs things in the right order. It needs to evaluate the dub synth before the BBCut synths, or else they will have no input to process. We’ll use Groups to organise synths into the right order.

Groups

Groups exist on the server. A group is a list of synths and other groups or nodes on the server. The server evaluates those lists in order. If you create two groups and know that groupA comes before groupB, then you also know that every synth attached to groupA will be evaluated before any synths on groupB. If you create a source group and an effect group after it, you know that all your sources will be calculated before the fx. This is what you want.

source= Group.head(s); // one at the head
fx= Group.after(source); // this one comes after

Group.head takes a group or a server as an argument. It creates a new group at the head of the group or server specified. Being at the head of the list means that it goes first. We know our source group will get evaluated before anything else in the default group on the server.

Group.after creates another group that comes after its argument. fx will be evaluated after source.

To attach a Pbind or Event to a group, you use the group’s method nodeID:

Pbind(
 //. . .
 \group,	source.nodeID
);

// Event
(freq:440, group:source.nodeID).play;

//Synth
Synth(\default, args:[\freq, 440], target:source.nodeID);

Note that if you stop playing by typing command-., that clears all nodes from the server. You will have to recreate your groups in order to use them again.
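
One way around this (just a sketch, not something you are required to do) is to register a function with CmdPeriod, so the groups are rebuilt every time you press command-. . Here I am using environment variables (~source, ~fx) rather than the plain names above:

CmdPeriod.add({
	~source = Group.head(s);
	~fx = Group.after(~source);
});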

Busses

Busses are ways of routing audio around.

bus= Bus.audio(s,1);

That creates a mono bus on the server. If we wanted stereo, the second argument would be a 2 instead of a 1.

To write to a bus, set the out argument of a Pbind, Event or Synth to the bus’s index:

Pbind(
	//. . .
	\out,	bus.index
).play
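
BBCut aside, anything running in the fx group can read that bus with In.ar. As a sketch, here is a hypothetical reverb SynthDef (\verb is my own name, not part of BBCut) placed in the fx group. Evaluate the SynthDef first, then create the Synth:

SynthDef(\verb, { |in = 0, out = 0|
	var dry = In.ar(in, 1);               // read the mono bus
	Out.ar(out, Pan2.ar(FreeVerb.ar(dry, mix: 0.4, room: 0.8), 0));
}).add;

// run it in the fx group so it is evaluated after the sources
Synth(\verb, [\in, bus.index], target: fx);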

BBCut with busses

When working with live streams, BBCut uses a buffer to hold analysis data, so you have to create one for that purpose:

buf = BBCutBuffer.alloc(s,44100,1);

To read from a bus with BBCut, use CutStream1 instead of CutBuf:

CutStream1(bus.index, buf)

The first argument is the bus to read from and the second is the buffer to hold the analysis. It’s also important that this stream’s synths are running in the right group on the server:

CutGroup(CutStream1(bus.index, buf), fx)

CutGroups have multiple uses in BBCut, one of which is to attach them to particular groups on the server. Here, we’re attaching it to the fx group. Then, we can use it like any other CutGroup:

cut = BBCut2(CutGroup(CutStream1(bus.index, buf), fx), 
   BBCutProc11(8, 4, 16, 2, 0.2)).play(clock);

Or

group = CutGroup(CutStream1(bus.index, buf), fx);
group.add(CutBit1.new(4));
cut = BBCut2(group, BBCutProc11(8, 4, 16, 2, 0.2)).play(clock);

Tying it all together

This example is going to use a sample that I downloaded from freesound, but you can swap in your own sample or use the two that came with BBCut.

(

var bus, sf, buf, clock, source, fx, bpm, dub, cut1, cut2, cut3;

SynthDef(\MCLDdub,{ |out = 0, bpm = 90, amp = 0.2, pan =0, gate = 1|
    var trig, note, son, sweep, env;

    trig = CoinGate.kr(0.5, Impulse.kr((bpm/60).reciprocal));
      // bpm / 60 = beats per second
    note = Demand.kr(trig, 0, Dseq((22,24..44).midicps.scramble, inf));

    sweep = LFSaw.ar(Demand.kr(trig, 0, Drand([1, 2, 2, 3, 4, 5, 6, 8, 16], inf))).exprange(40, 5000);

    son = Pulse.ar(note * [0.99, 1, 1.01]).sum;
    son = LPF.ar(son, sweep);
    son = Normalizer.ar(son);
    son = son + BPF.ar(son, 2000, 2);

    //////// special flavours:
    // hi manster
    son = Select.ar(TRand.kr(trig: trig) < 0.05, [son, HPF.ar(son, 1000) * 4]);
    // sweep manster
    son = Select.ar(TRand.kr(trig: trig) < 0.05, [son, HPF.ar(son, sweep) * 4]);
    // decimate
    son = Select.ar(TRand.kr(trig: trig) < 0.05, [son, son.round(0.1)]);
	//bit crush
    son = Select.ar(TRand.kr(trig: trig) < 0.05, [son, Latch.ar(MantissaMask.ar(son, 8), Impulse.ar(SampleRate.ir / 2))]);

    son = (son * 5).tanh;
    son = son + GVerb.ar(son, 10, 0.1, 0.7, mul: 0.3);
    env = EnvGen.kr(Env.cutoff(bpm/30, amp), gate, doneAction:2); // take 2 beats to fade
    son = Pan2.ar(son, pan, env);
    Out.ar(out, son)
}).add;

// Groups
source= Group.head(s); // one at the head
fx= Group.after(source); // this one comes after

 bus= Bus.audio(s,1); // a bus to route audio around

// break beat
sf = BBCutBuffer("sounds/drums/breaks/hiphop/22127__nikolat__oldskoolish_90bpm.wav", 16);
 // don't forget to change that line to point at your own breakbeat sample!
 
 // a buffer used by BBCut to hold analysis
 buf = BBCutBuffer.alloc(s,44100,1);
 
 bpm = 90;
 
 //  Clocks
 TempoClock.default.tempo_(bpm/60);
 clock= ExternalClock(TempoClock.default); 
 clock.play;  
 
 // Where stuff actually happens
 Routine.run({

	s.sync; // wait for buffers to load
	
	// start playing the dub patch without cutting
	dub = (instrument:\MCLDdub,
  		  out:0, bpm:bpm, amp:0.05, pan:0, dur:inf, group:source.nodeID)
  		  .play(clock.tempoclock);

 	// let it play for 5 seconds
 	5.wait;
 	
 	// start the drums
 	cut1 = BBCut2(CutBuf3(sf, 0.5), BBCutProc11(8, 4, 16, 2, 0.2)).play(clock);
 	
 	 // start a process to cut things coming in on the bus
	cut2 = BBCut2(CutGroup(CutStream1(bus.index, buf), fx), 
		BBCutProc11(8, 4, 16, 2, 0.2)).play(clock);
	
	// let that get going
	2.wait;
	
	// now tell the dub to start writing to the bus
	dub.set(\out, bus.index);
	dub.set(\pan, -1); // it's a mono bus, so pan hard left
	
	10.wait;
	
	// let's try a different cutProc
	
	cut3 = BBCut2(CutGroup(CutStream1(bus.index, buf), fx), 
		SQPusher2.new).play(clock);
	
	cut2.pause;
	
	// let it play for a bit
	30.wait;
	
	// fade out the dub
	dub.set(\gate, 0);
	5.wait;
	//stop everything
	cut1.stop;
	cut3.stop;
});
)

Dub Patch and Timing

Dan Stowell’s Dub Patch

Dan posted this to his blog and to the SuperCollider mailing list. I’ve made two changes to it. The first is to make the tempo clearer. I put a bpm argument in it and set it at the top. To change the bpm, you can change that number. I’ve also added an argument for amp, as this one runs really loud otherwise.

{ |bpm = 90, amp = 0.2|
    var trig, note, son, sweep;


    trig = CoinGate.kr(0.5, Impulse.kr((bpm/60).reciprocal)); 
      // bpm / 60 = beats per second
    note = Demand.kr(trig, 0, Dseq((22,24..44).midicps.scramble, inf));

    sweep = LFSaw.ar(Demand.kr(trig, 0, Drand([1, 2, 2, 3, 4, 5, 6, 8, 16], inf))).exprange(40, 5000);

    son = LFSaw.ar(note * [0.99, 1, 1.01]).sum;
    son = LPF.ar(son, sweep);   
    son = Normalizer.ar(son);
    son = son + BPF.ar(son, 2000, 2);

    //////// special flavours:
    // hi manster
    son = Select.ar(TRand.kr(trig: trig) < 0.05, [son, HPF.ar(son, 1000) * 4]);
    // sweep manster
    son = Select.ar(TRand.kr(trig: trig) < 0.05, [son, HPF.ar(son, sweep) * 4]);
    // decimate
    son = Select.ar(TRand.kr(trig: trig) < 0.05, [son, son.round(0.1)]);

	
    son = (son * 5).tanh * amp;
    son = son + GVerb.ar(son, 10, 0.1, 0.7, mul: 0.3);
    son.dup;
}.play

Of course, there are several ways you could modify this code. If you want to make it sound more chiptune-like, you could change the waveform and add in some bit crushing:

{	|bpm = 90, amp = 0.2|
    var trig, note, son, sweep;


    trig = CoinGate.kr(0.5, Impulse.kr((bpm/60).reciprocal)); 
      // bpm / 60 = beats per second
    note = Demand.kr(trig, 0, Dseq((22,24..44).midicps.scramble, inf));
    sweep = LFSaw.ar(Demand.kr(trig, 0, Drand([1, 2, 2, 3, 4, 5, 6, 8, 16], inf))).exprange(40, 5000);

	// change to a Pulse wave
    son = Pulse.ar(note * [0.99, 1, 1.01]).sum;
    son = LPF.ar(son, sweep);   
    son = Normalizer.ar(son);
    son = son + BPF.ar(son, 2000, 2);

    //////// special flavours:
    // hi manster
    son = Select.ar(TRand.kr(trig: trig) < 0.05, [son, HPF.ar(son, 1000) * 4]);
    // sweep manster
    son = Select.ar(TRand.kr(trig: trig) < 0.05, [son, HPF.ar(son, sweep) * 4]);
    // decimate
    son = Select.ar(TRand.kr(trig: trig) < 0.05, [son, son.round(0.1)]);

	son = MantissaMask.ar(son, 8); // make 8 bit

	son = Latch.ar(son, Impulse.ar(SampleRate.ir / 2)); // halve the sampling rate
	son = LPF.ar(son, 10000); // kill high artifacts
	
    son = (son * 5).tanh * amp;
    son = son + GVerb.ar(son, 10, 0.1, 0.7, mul: 0.3);
    son.dup;
}.play

You could also use Select.ar to switch the bit crushing off and on:

  //bit crush
      son = Select.ar(TRand.kr(trig: trig) < 0.05, [son, Latch.ar(MantissaMask.ar(son, 8), Impulse.ar(SampleRate.ir / 2))]);

Let’s say you wanted to use this synth with a Pbind or BBCut. You’d need to put it in a proper SynthDef. I’ll take this opportunity to add panning and an envelope.

SynthDef(\MCLDdub,{ |out = 0, bpm = 90, amp = 0.2, pan =0, gate = 1|
    var trig, note, son, sweep, env;


    trig = CoinGate.kr(0.5, Impulse.kr((bpm/60).reciprocal)); 
      // bpm / 60 = beats per second
    note = Demand.kr(trig, 0, Dseq((22,24..44).midicps.scramble, inf));

    sweep = LFSaw.ar(Demand.kr(trig, 0, Drand([1, 2, 2, 3, 4, 5, 6, 8, 16], inf))).exprange(40, 5000);

    son = Pulse.ar(note * [0.99, 1, 1.01]).sum;
    son = LPF.ar(son, sweep);   
    son = Normalizer.ar(son);
    son = son + BPF.ar(son, 2000, 2);

    //////// special flavours:
    // hi manster
    son = Select.ar(TRand.kr(trig: trig) < 0.05, [son, HPF.ar(son, 1000) * 4]);
    // sweep manster
    son = Select.ar(TRand.kr(trig: trig) < 0.05, [son, HPF.ar(son, sweep) * 4]);
    // decimate
    son = Select.ar(TRand.kr(trig: trig) < 0.05, [son, son.round(0.1)]);
  //bit crush
    son = Select.ar(TRand.kr(trig: trig) < 0.05, [son, Latch.ar(MantissaMask.ar(son, 8), Impulse.ar(SampleRate.ir / 2))]);

	
    son = (son * 5).tanh;
    son = son + GVerb.ar(son, 10, 0.1, 0.7, mul: 0.3);
    env = EnvGen.kr(Env.cutoff(bpm/30, amp), gate, doneAction:2); // take 2 beats to fade
    son = Pan2.ar(son, pan, env);
    Out.ar(out, son)
}).add;

Managing Timing of non-cut stuff when using BBCut

Now, let’s say we want to run it with BBCut. We need to start it with the same BPM as the BBCut program, and we need to use a clock to make sure it starts on time. This was covered in a previous post.

 	// The default clock. The tempo is the BPM (90) divided by 60, the number of seconds in a minute
 	TempoClock.default.tempo_(90/60);

 	// BBCut uses its own clock class. We're using the default clock as a base
 	clock= ExternalClock(TempoClock.default); 
 	clock.play;  
	
	dub = (instrument:\MCLDdub, out:0, bpm:90, amp: 0.09, pan:0, dur:inf).play(clock.tempoclock);

dub here is an Event, which you can create using parens like this. We’re using an Event because of the timing built into that class. Passing the clock to play means that the loop will always start on a beat and thus be synced with the other BBCut material. Note that I’ve set the dur to inf, so it doesn’t stop by itself. When I want the dub to stop, I can change the gate to 0:

  dub.set(\gate, 0);

If you want to use a Pbind amidst BBCut code, you can also use the clock to make sure it stays on time.

Pbind(
	\dur,	(90 / 60) * 4, // make each note last for 4 beats
	\note,	Prand([1, 3 ,5, 9], inf)
).playExt(clock);

Note that when you’re playing an event, you need to use clock.tempoclock, but with a Pbind, you can do a playExt and use the BBCut External Clock directly.

If you want the Pbind to play short notes, like you would get back from a BBCut CutStream, you can do that, using something called Pbindf. The first argument is a stream. That stream contains information, like durations, but we can still set notes or whatever else we want.

stream = CutProcStream(BBCutProc11.new);
  
pb = Pbindf(
	stream,
	\scale,  Scale.gong,
	\degree,  Pwhite(0,7, inf),
	\amp, 0.2
).playExt(clock);

Let’s try putting this all together, with a synth for that Pbind. Before you run this, evaluate the MCLDdub SynthDef above.

(
	var sf, cut, dub, clock, stream, bpm, pb1, pb2;
	
	SynthDef(\saw, { |out, freq, amp, pan, dur|
  
  		var saw, env, panner;
  
  		env = EnvGen.kr(Env.triangle(dur, amp), doneAction: 2);
  		saw = MantissaMask.ar(Saw.ar(freq, env), 8);
  		panner = Pan2.ar(saw, pan);
  		Out.ar(out, panner)
 	}).add;


	SynthDef(\squared, { |out, freq, amp, pan, dur, gate = 1|
  
  		var sq, env, panner;
  
  		env = EnvGen.kr(
  			Env.adsr(dur * 0.1, dur* 0.1, 0.9, dur * 0.1, amp, 0), 
  			gate, doneAction: 2);
  		sq = MantissaMask.ar(Pulse.ar(freq, 0.5, env), 8);
  		panner = Pan2.ar(sq, pan);
  		Out.ar(out, panner)
 	}).add;
 
	
	sf = BBCutBuffer("sounds/break",8); // if you use your own breakbeat, it will sound better

	bpm = 90;


 	// The default clock. The tempo is the BPM divided by 60, the number of seconds in a minute
 	TempoClock.default.tempo_(bpm/60);

 	// BBCut uses its own clock class. We're using the default clock as a base
 	clock= ExternalClock(TempoClock.default); 
 	clock.play;  

	Routine.run({

  		s.sync; // wait for buffers to load

		cut = BBCut2(CutBuf3(sf, 0.4), BBCutProc11(8, 4, 2, 2, 0.2)).play(clock);

		5.wait;

  		dub = (instrument:\MCLDdub, 
  		  out:0, bpm:bpm, amp:0.05, pan:0, dur:inf)
  		  .play(clock.tempoclock);
		
		5.wait;
		
		// synthpads
		pb1 = Pbind(
			\instrument,	\saw,
			\dur,		(bpm / 60) * 4, // make each note last for 4 beats
   			\scale,  Scale.gong,
			\degree,	Prand([1, 3 ,5, 7], inf),
			\octave,	3,
			\amp,	0.2
		).playExt(clock);
		
		5.wait;
		
		// fade out the dub
		dub.set(\gate, 0);
		
		// now do the one with BBCut rhythms
		 stream = CutProcStream(BBCutProc11.new);
  
  		pb2 = Pbindf(
   			stream,
   			\instrument, \squared,
   			\scale,  Scale.gong,
   			\degree,  Pwhite(0,7, inf),
   			\octave,  Pwhite(3,4, inf),
   			\amp,  1,
   			\sustain,  0.01
  		).playExt(clock);

		// end things
		5.wait;
		pb1.stop;
		5.wait;
		cut.stop;
		2.wait;
		pb2.stop;
	});
)