
Chiptunes

This is the name for music made with old computers and game devices, or music that emulates those sounds.

Historical

Atari

Atari's Pong, released in 1972, was one of the first commercially successful video games. In 1977, Atari launched a home games console called the VCS, which was later renamed the Atari 2600. Its TIA chip had two audio voices, with separate waveform and volume settings for each. In 1980, a piece of software called the Atari Music Composer was released for the Atari 400/800, which made composition possible for home users. However, most of the more interesting music was being written for a different platform.

C64

In 1982, another 8-bit computer, the Commodore 64, was launched. It was originally intended to be a business computer, designed to compete with x86-based machines (the kind now better known as the PC). For home users, though, the sound chip was a particular selling point.

The SID chip had 3 channels of audio, each with its own ADSR envelope and ring modulation. The oscillators had a range of roughly 16-4000Hz and could output sawtooth, triangle or pulse waves, or noise. The oscillators were routed through a filter, which could act as a highpass, lowpass, bandpass or notch filter. It was possible to get a fourth audio source by using an “undocumented feature” to output 4-bit samples. This worked better on some versions of the chip than others.

This chip is still popular with music makers. In 1997, a synthesiser called the SidStation came out, built around these chips, even though they had been out of production for years by then.

The designer of the SID, Bob Yannes, was given vague instructions and thus had a lot of freedom, so he designed the best chip that he could. He later went on to co-found the synth company Ensoniq.

During the 80’s, the best known SID composer, who inspired many of his contemporaries, was a British man named Rob Hubbard. We listened to Monty on the Run, which many people consider to be his best composition. You can download the SID file of this piece from demovibes. We also listened to the piece he thinks is his best, W. A. R., which you can download from here.

In order to play these files, you need a SID emulator. I used one called SIDPLAY. This player also downloads a vast library of canonical SID pieces. The file format dates back to a time when disk space was a precious commodity, so each of the files takes up only 8k on my disk!

The best known SID composer now is Martin Galway, who is from Belfast. He composed for multiple platforms, including the C64 and the Sinclair ZX Spectrum. We listened to music he wrote for Comic Bakery. He was the first SID composer to release a piece using samples, which was for Arkanoid. Both of those pieces are included in the library that comes with SIDPLAY.

Demoscene

One habit of users then, as now, was getting around copy protection on games. Groups of hobbyists would crack games and share them. They started adding splash screens, taking credit for the crack. Over time, these splash screens got more and more elaborate and gained sound and graphics. Eventually, some groups focussed mostly on the splash screens and dropped the attached cracks. These sound and animation displays were called demos, and the social world that surrounded them was called the demoscene.

Demoscene artists were competitive, wanting to display their own skill and the power of their preferred platform. There were social events called demoparties, where people would share their demos and important programmers would display their work on projectors in front of an audience. This scene was largely based in Europe.

Current

The chiptune scene has since migrated to the web, where there are many communities, including Micromusic, the 8 Bit Collective, 8 bit peoples and Greyscale. These communities share tools and mp3s. Some of them are based on the idea of openness and are in the FLOSS scene or embrace many FLOSS ideals.

Micromusic isn’t solely chiptune music, but does chip style and lo-fi. They host music that sounds like old game music, even if it’s made with more modern means. One of the founders of Micromusic is Emma Davidson, who publishes music as Lektrogirl. We listened to her piece Gang Girlz, which you can download from the sidebar of her blog.

Another interesting Micromusic composer is Psilodump. We listened to The Somnambulist, which you can download from Last.fm

Greyscale is a Polish chiptune collective with an interest in the Atari 2600. This platform seems to have more new music circulating on the internet than it has original game music being traded. We listened to X-Ray’s track Zizibum, downloaded from Greyscale.

The Sinclair ZX Spectrum also has current practitioners, including the ZX Spectrum Orchestra. We listened to Hung Up by the AY Riders. AY was the name of the sound chip in the Spectrum. Those computers don’t have a way to sync to each other, so when the AY Riders play, they use 2-4 machines with no way to sync them up. Instead, they make sure they program the right timings and just try to start everything at around the same time.

The ZX Spectrum is programmed with BASIC, so modern chiptune composers who want to use this platform will learn BASIC.

Handheld Devices

Pixelh8, a British composer, wrote software for the Game Boy and the Nintendo DS to allow users to make chiptunes. We listened to Game Boy Meets Game Girl.

lo-bat., a Belgian composer, also works with handheld devices. We listened to Twinkle, downloaded from his website. He’s a big proponent of the Creative Commons license, and posts his music to his website for free download. The band Crystal Castles misunderstood this license and used some of his work without asking permission, which caused a scandal.

With other Instruments

Some bands combine chiptunes with other instruments. One example of this is a band from Birmingham called the 8 bit Ninjas. We listened to Push It.

Much of the information above comes from Flat Four. The transcripts are interesting and the podcasts have great musical examples.

How to make Chiptunes

There’s a long section on this in the interview with a chiptune artist. You can also make lo-fi sounds with other tools. If you use square waves or other simple periodic waveforms, you can get a chiptune-like sound. It also helps to limit yourself to two or three voices and to use only synthesis methods found in the older chips, like ring modulation.
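
For instance, here’s a minimal sketch (my own, not taken from any of the sources above) of a single pulse-wave voice, plus two Pbinds that layer it into a two-voice, chip-like texture:

(
SynthDef(\ChipPulse, {|freq = 440, width = 0.5, dur = 1, amp = 0.2, out = 0|

	var osc, env;

	env = EnvGen.kr(Env.linen(0.01, dur * 0.9, 0.05, amp), doneAction: 2);
	osc = Pulse.ar(freq, width, env); // a square wave when width is 0.5, thinner pulses otherwise
	Out.ar(out, osc);
}).add;
)

(
// two voices only, in the spirit of a simple game tune
Pbind(\instrument, \ChipPulse, \degree, Pseq([0, 4, 7, 4], 4), \dur, 0.25, \width, 0.25).play;
Pbind(\instrument, \ChipPulse, \degree, Pseq([-7, -3, 0, -3], 4), \dur, 0.25).play;
)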

You can reduce the bit depth to make an old-style sound. In SuperCollider, you can do that with MantissaMask. Let’s say you have a sawtooth wave and you want to make it 8 bit:

SynthDef(\EightBitSaw, {|freq = 440, dur = 1, amp = 0.2, out = 0|

	var osc, env, mask;
	
	env = EnvGen.kr(Env.linen(dur * 0.1, dur * 0.8, dur * 0.1, amp, 0), doneAction: 2);
	osc = Saw.ar(freq, env);
	mask = MantissaMask.ar(osc, 8); // make 8 bit	
	Out.ar(out, mask);
}).play

Bitcrushing

You can also use bitcrushing techniques, like using round. If you round(0.1), then the waveform can only have values that are multiples of 0.1. If you have a signal going from 0 to 1, it will start at 0, go to 0.1, then to 0.2, etc, with no values in between.

SynthDef(\RoundedSine, {|freq = 440, dur = 1, amp = 0.2, out = 0|

	var osc, env;
	
	env = EnvGen.kr(Env.linen(dur * 0.1, dur * 0.8, dur * 0.1, amp, 0), doneAction: 2);
	osc = SinOsc.ar(freq, 0, env).round(0.1); // quantise the output to multiples of 0.1
		
	Out.ar(out, osc);
}).play

It can sound good to combine either of these methods with Latch.ar. This is a sample and hold function. If we trigger it at half the sample rate, we hold every other sample. This effectively cuts the sampling rate in half:

SynthDef(\LatchedSaw, {|freq = 440, dur = 1, amp = 0.2, out = 0|

	var osc, env, latched;
	
	env = EnvGen.kr(Env.linen(dur * 0.1, dur * 0.8, dur * 0.1, amp, 0), doneAction: 2);
	osc = Saw.ar(freq, env).round(0.1);
	latched = Latch.ar(osc, Impulse.ar(SampleRate.ir / 2));
	Out.ar(out, latched);
}).play

Ring modulation, one of the features of the C64, is when we vary the amplitude of one signal by another. Remember that on the C64, each oscillator had its own envelope:

SynthDef(\RingSine, {|freq1 = 111, freq2 = 440, dur = 1, amp1 = 1, amp2 = 0.2, out = 0|

	var osc1, osc2, env1, env2;
	
	env1 = EnvGen.kr(Env.linen(dur * 0.1, dur * 0.8, dur * 0.1, amp1, 0), doneAction: 2);
	env2 = EnvGen.kr(Env.linen(dur * 0.1, dur * 0.8, dur * 0.1, amp2, 0));
	
	osc1 = SinOsc.ar(freq1, 0, env1);
	osc2 = SinOsc.ar(freq2, 0, osc1) * env2;
	Out.ar(out, osc2);
}).play

Speaking of envelopes, we’ve been using linear, fixed-duration envelopes in these examples. Proper ADSR envelopes are Pbind-ready:

SynthDef(\RingSineGated, {|freq1 = 111, freq = 440, gate = 1, amp1 = 1, amp2 = 0.2, out = 0|

	var osc1, osc2, env1, env2;
	
	env1 = EnvGen.kr(Env.adsr(0.1, 0.01, amp1, 0.5), gate);
	env2 = EnvGen.kr(Env.adsr(0.2, 0.1, amp2, 0.1), gate, doneAction: 2);
		// with ring modulation, it doesn't matter which envelope gets the doneAction
	
	osc1 = SinOsc.ar(freq1, 0, env1);
	osc2 = SinOsc.ar(freq, 0, osc1) * env2;
	Out.ar(out, osc2);
}).add;

Pbind(
	\instrument,	\RingSineGated,
	\freq,		Pseq([440], 1),
	\freq1,	111,
	\amp1,	1,
	\amp2,	0.2,
	\dur,		1
).play

The timbral possibilities of C64-like sounds are fairly extensive!

Circuit Bending

Circuit bending was invented in 1967 by Reed Ghazala, when he accidentally shorted out a small battery-powered amplifier and heard odd sounds coming from it. It reminded him of the sounds that came from much more expensive synthesisers. He had no electronics training, so rather than build new, reliable circuits based on the shorts he had found, he came up with ways to short out a single circuit in interesting and unpredictable ways.

He was not the first person ever to find cool sounds in an accidental short. Serge Tcherepnin, who went on to invent the Serge Modular synthesiser, had a similar experience with a radio in the 50s. But Tcherepnin was able and interested enough to figure out what was going on with the shorted circuit and create exact duplicates of it. Ghazala did the opposite: he took mass-market, predictable devices and made them unique.

Here is a short documentary about Ghazala’s circuit bending.

His website has some photos of the very cool looking bent instruments that he’s made. He also has a reference text there on how to circuit bend.

The most important point he makes is to only try to bend things running off batteries. If you try to short something running off mains power, you can die. So use battery operated devices only, and keep your voltage low, under 9 volts. Also, be on the lookout for large capacitors, as they can cause nasty shocks too.

Older devices with big chips and leads that are fairly far apart are easier to bend. Use your fingers, a wire or alligator clips to try shorting out different spots on the circuit board. If it makes a cool sound, mark the spot with a marking pen.

If a short sounds good, you can wire in a switch. If your fingers sound good, you can wire in a knob or metal bits to touch.

These experiments can sometimes kill a toy, but usually won’t. He suggests wearing goggles because he had a chip explode once.

To find devices to bend, go into charity shops with batteries in your pockets: 4 AAs, C, D, etc.

In order to get some skill with soldering, he suggests you start by building a kit of some kind. Looking at some books on electronics might also be helpful. He suggests Getting Started in Electronics by Forrest Mims. This can often be found used on Amazon.

He has a bunch of cool sound examples on his Bent Sound web page. We listened to Silence the Tongues of Prophecy, which is near the top of the page.

Ghazala is really a big proponent of touch points. He states,

Body contacts are also found through circuit bending. These allow electricity to flow through the player’s body, flesh and blood becoming an active part of the electronic sound circuit. This interface extends players and instruments into each other, creating, in essence, new life forms. An emerging tribe of bio-electronic Audio Sapiens. [source]

Circuit bending is often a DIY art form. There are some YouTube videos that can help you get started:

For those who want that killer sound, but don’t want to DIY, you can buy samples of bent circuits or Speak and Spells or Speak and Maths bent by other people.

Sampling Speak and X devices has been popular since they were introduced. Kraftwerk, Jean Michel Jarre and others have all sampled them. Aphex Twin samples a bent one in 54 Cymru Beats, which we listened to; it is not on Spotify.

Of course, other toys and devices are bendable: I think FM3’s Buddha Machines are a nifty example of using hardware to distribute music, even before anybody bends them.

It’s also possible to get sounds out of electronics in other ways. We watched a video by Nicolas Collins that showed how to use a guitar pickup or a coil to listen to circuits. His book on Hardware Hacking is definitely worth checking out.

Noise Music

The Futurists

In 1909, Marinetti published The Founding and Manifesto of Futurism, thus starting the movement. Futurists were inspired by the glory of war, the beauty of industry and especially the beauty of speed. They felt that industrialisation and their movement meant the end of polite art and polite society.

Russolo wrote The Art of Noises in 1913, which is also printed in Audio Culture. This was a manifesto for noise-based music, which he felt was a natural progression of music, given the changing sonic landscape of urban areas. He wrote, “today, noise is triumphant and reigns sovereign over the sensibility of men.”

In order to create noise music, he invented noise machines, called Intonarumori, which he thought would replace orchestras.

We listened to Il Risveglio Di Una Città by Russolo.

For the hundredth anniversary of Futurism, Luciano Chessa recreated several Intonarumori and put on some concerts with historic pieces and new compositions by current composers. We watched a short video of the noise machines:
http://vimeo.com/7527035

Luciano Chessa and his remake of Luigi Russolo’s Intonarumori. from bart woodstrup on Vimeo.

We listened to a recording of a new piece by John Butcher, which is not yet published.

Mid-Century

In 1948, Pierre Schaeffer returned to the idea of noise in music and created several montages of non-musical sounds, which he called “musique concrète”. He felt that the ability to record and manipulate tape increased the palette of sounds available to composers. In contrast to the ideas of the Futurists, his music was polite and academic. We listened to Étude Violette from Cinq Études de Bruits (Five Noise Studies).

John Cage also engaged with noise, from a more radical direction. When people are asked to define “noise,” they often place it in binary opposition to musical sound: noise is unwanted, possibly harmful sound of human origin. After industrialisation, it came to be associated with factories, machines and technology, which is how the Futurists and Schaeffer saw it. Cage, though, called into question the idea of any sound existing in binary opposition to music. When he wrote 4’33”, the noise became the music, thus asserting that there is no such thing as noise; it’s all music.

Industrial Music

Early industrial music was heavily influenced by William S Burroughs, a writer who did cut-up pieces in the 1960’s. He would take news articles, cut out sentences and then read them back in a random order. He applied this same technique to tape.

His writings also had strange, sexual, brutal images, which also appealed to the industrial bands, who tried to be shocking.

In London in 1975, the band Throbbing Gristle formed. Two of the band members, Genesis P-Orridge and Cosey Fanni Tutti, were lovers and had previously been in a performance art group together, called COUM. Throbbing Gristle also had two other members who, amongst playing other things, also manipulated tape: Peter “Sleazy” Christopherson and Chris Carter.

Tutti had previously worked in porn, so they did one event that used her pictures. Another time, they did an art exhibit of used tampons and soiled nappies. They used Nazi images and did extreme things on stage, in an effort to shock. They did this to represent the alienation and despair some people felt as a result of Thatcher’s policies. P-Orridge wrote

it’s basically about the post-breakdown of civilization. You know, you walk down the street and there’s a lot of ruined factories and bits of old newspapers with stories about pornography and page three pinups, blowing down the street and you turn a corner past the dead dog and you see old dustbins. And then over the ruined factory there’s a funny noise. (in Ford Wreckers, 6.28 quoted in Hegarty p 108-9)

P-Orridge may have been drawn to making shocking art because s/he was already considered shocking by society. By being intentionally shocking, s/he’s able to take control of the situation.

We listened to Slug Bait 2.

Another industrial band from London, formed in 1978, is Nurse With Wound. They were influenced by jazz, Krautrock and Throbbing Gristle. We listened to Darkness Fish, which is not on Spotify.

Japanese Noise

Merzbow is the king of noise music. He formed a record label in Japan in 1979. His music, and the Japanese scene in general, is not angry like Throbbing Gristle. However, like them, he uses images of pornography, ritualised eroticism, etc. We listened to SCUM-Steel Cum 7″ – side A, which you can download from http://blog.wfmu.org/freeform/2008/05/vinyl-finds-m-1.html.

(All the above heavily informed by Noise / Music: A History by Paul Hegarty, which is highly recommended.)

Glitch

Closely related to noise is glitch music. The band Oval, for example, put stickers on CDs to make them skip. Ryoji Ikeda is a Japanese glitch artist who lives in Paris. We listened to Test Pattern #0101, which has a lot of high frequencies.

Another glitch artist is Terre Thaemlitz, from New York City. His music is heavily informed by gender theory; he considers himself to be non-essentialist transgender. We listened to There Was A Girl/There Was A boy.INTERSEx from his album Interstices. In this track, he applies glitch techniques to text.

Current Noise

Survival Research Labs is an art group from San Francisco that does dangerous exhibits in which the audience could be injured. They have robots that fight and shoot out fire, and things with fast-moving, heavy objects. The point is to violate Health and Safety. We listened to a noise track they did, October 24, 1992 Graz, Austria.

Blectum From Blechdom was a hard-to-classify electronic band from Oakland, California. Both members were graduates of Mills College. They made music using toys and noise sounds and were generally very odd, although they were also cute in that they would dress up in costumes and jump up and down in an excited way. They became popular in Japan. We listened to a couple of tracks from their album The Messe Jesse Fiesta.

Women Take Back the Noise was a set of compilation CDs containing noise music made by women. They did this project to promote music by women and combat institutionalised sexism in the noise scene. The packaging of the product addresses issues of femininity and masculinity. Noise is often considered a very masculine genre, so they put pictures of flowers on the CDs. The package has on it a circuit bent noise-maker, which is controlled by touching a metal spike embedded in a fabric flower. In this way, they challenge gendered notions of what is noise music and who can make it.

From that compilation, we listened to Ming by Khate, Cracked Mandible by Insect Deli and They Look As Innocent As Newborn Lambs. The Sick Fucks. by Fe-Mail.

Finally, we listened to Isis by Venison Whirled, which is Lisa Cameron of Texas. She gets the name from a shop called Venison World, where you bring in a deer that you’ve killed and they butcher it for you.

How to Make Noise

A lot of noise sounds are things going wrong in electronics, so one technique is to use zero-input mixers. This is when you get a mixer, plug nothing into it and then crank it way up to get the line noise of running an electronic device. Abusing microphones is a good way to get noise. Feedback and peaking are both good noise sources. Contact mics are also very useful. You can attach them directly to machines, like clothes washers or motors, or attach them to pieces of metal and then saw or drill the metal.
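
The zero-input idea can also be approximated in SuperCollider, for those without a spare mixer. This is just my own rough sketch of the general principle (a feedback loop with the gain pushed past unity), not anyone’s particular patch:

(
{
	var fb, sig;
	// read the feedback bus and add a tiny noise floor to get the loop started
	fb = LocalIn.ar(2) + ({ PinkNoise.ar(0.001) } ! 2);
	// a slowly wandering bandpass filter stands in for the mixer's EQ
	sig = BPF.ar(fb, LFNoise2.kr(0.1).exprange(100, 4000), 0.3);
	// gain just over unity, soft-clipped, so the loop howls without blowing up
	sig = (sig * 1.5).tanh;
	LocalOut.ar(sig);
	Limiter.ar(sig, 0.5);
}.play;
)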

Data Bending

There’s a technique called data bending where you put an audio header on non-audio files. You can do this with SoundHack. Under the File menu, select “Open Any . . .” and open a file. Then, under the Hack menu, select “Header Change.” Pick how many channels you want and then pick an encoding. Most AIFFs and WAVs are 16 bit linear, but you can try out other encodings to see how the sound changes. Click “Save Info.” This shouldn’t change your original file. In order to save your changes, go to the File menu and select “Save a Copy.” There, you can pick the format that you want to save as. This does need to match the header you picked. You will need to edit the file name so that it ends in .wav or .aiff or whatever is appropriate to the format that you choose. SoundHack is slightly flaky, so you may need to save the file before you can listen to it. Different types of files will have different sounds, so try a lot out. Photoshop files, I think, are very nice.
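
If you’d rather stay in SuperCollider than use SoundHack, here’s a rough sketch of the same idea: read a file’s raw bytes and treat them as audio. The path is just a placeholder for whatever file you want to bend.

(
var path = "/tmp/picture.psd";	// hypothetical input file; use any non-audio file you like
var file, n, data;
file = File.open(path, "rb");
n = min(file.length, 44100 * 10);	// cap at roughly 10 seconds of mono 44.1kHz audio
data = Array.fill(n, { file.getInt8 / 128.0 });	// signed bytes scaled to -1..1
file.close;
b = Buffer.loadCollection(s, data, 1, { |buf| buf.play });
)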

Interview with a Chiptune Artist

An artist called Inverse Phase (who wishes to remain otherwise anonymous) agreed to answer some questions about chiptune music. He or she has a bunch of music on his or her bandcamp website. I listened to a few of those and then sent off several questions:

Do you use ‘obsolete’ hardware as sound sources? If so, what are you using?

This is a bit of a yes/no question. My NES/Famicom and Commodore 64 music is written on a PC. I don’t have the gear to get my music onto actual hardware (for example, I can’t burn ROMs for cartridges), but if I had that gear, it wouldn’t take much effort to get the music onto those machines. Fortunately, the music comes out of my PC sounding very very close to the real deal, so it’s not so bad.

However, I just recently got an MSX computer. I can easily write to the floppies the MSX takes in my PC, so I will probably be using real hardware when I start writing and releasing MSX music.

How are you programming your old computers/whatever? Are you using new tools for making chiptunes or tools that were available when the hardware was current?

To compose, I use a “modern” tracker on my PC. My tools aren’t exactly new; my tracker is 15 years old, the concept under which it operates is 20-25 years old. They would run on an old 386. There are (now) native tools on both older systems that look similar to what I use, but I don’t use them.

I’ve created templates with the appropriate types of sounds for whichever system so I can listen inside the tracker. I can then take the files my tracker generates and “compile” them into something a Nintendo or Commodore could play, or run them through an emulator. As I mentioned above, my tracker’s output is so close that I often just render from it directly.

Understanding the tools available “back then” is often confusing for the uninitiated. On old systems, usually musicians were provided with tools (or instructions) directly from programmers on what to do and the programmer implemented the resulting material. The important thing is, almost every company, and often different titles from each company, used their own homebrew toolset. None of the tools were public, and most were kept under lock and key by the developers.

For the Commodore, there *were* a few public tools, but these days folks that want to write C64 music will either use their own tools or possibly a package called GoatTracker, which I’ve dabbled in a little bit.

An interesting Commodore aside is that frequently computer game crackers would attach small “intros” with some scrolling text, a neat visual effect or graphic, and a catchy tune. Other groups responded by attaching their own intro. This eventually evolved into the “demoscene”, where folks try to cram cool multimedia graphics and sound into a smallish program that is required to render it all from scratch, often pushing hardware to its limits.

For the Nintendo, it’s a slightly different world. Composers in the NES days didn’t write music on actual hardware. Many used computer software, or even text editors and a predefined idea of what to type to get certain notes and sounds. There was no “scene” for the NES in the 80s. Unlike the C64, you couldn’t just download a new program and try it out on your Nintendo. Those writing music for the NES back then were working for a company that had an official license from Nintendo to produce software.

Did you write your own tracker software or do you use tools that are available for purchase / download?

I didn’t write my own tracker. The tracker I use isn’t capable of making sound on its own without supplying waveforms, however. So, I did have to do some work after setting up the tracker. I use a combination of free software as well as a few of my own tools/scripts.

What are the names of the tools you use the most? Are there any tools you would recommend to a beginner?

I use a bunch of Unix utilities that probably no one cares about. =] I would personally (and highly) recommend that anyone trying to get started with NES music tries out FamiTracker (requires windows) from here:

http://famitracker.shoodot.net/

More advanced tracker users can use GoatTracker (mac, win, linux) from here to write Commodore 64 music (look under Tools and then scroll down a little):

http://cadaver.homeftp.net/

And if you guys want to have fun with the tracker environment without being limited to a specific sound chip, MilkyTracker is great (mac, win, linux):

http://milkytracker.org/

With MilkyTracker you may want to start with an existing module (song) and then just zap the note data so you don’t have to go through the process of building a collection of instruments. While it’s polite to ask permission to use instruments from other artists, some of them don’t care and others are quite unreachable, plus this is just for the purpose of messing around.

http://modarchive.org/ contains a bunch of songs MilkyTracker can load.

Have you modified or circuit bent your hardware?

I think circuit bending and modding is neat. While many folks doing it produce fascinating results, I prefer unmodded hardware. My fondest memories of game and synth music are with genuine hardware, and that’s what I aim to recreate. When I eventually write some MSX music, I do plan to use expansion cartridges, but again, it will be unmodified hardware that was available. I just prefer predictable things to happen when writing music.

I listened to several of your tracks on bandcamp and it sounded like you were only using old technology or old-style sounds. Do you ever mix in more recent sounds?

Ok. Technically speaking, chipmusic is music that is to be reproduced by a chip. Now, if you want to get pedantic about it, chips in your computer are reproducing each sound your PC makes, but in my opinion chiptune music is really about older sound chips. I feel like it wouldn’t be chipmusic if I added anything to the music that the original sound chips couldn’t produce.

Having said that, there are numerous bands using chipmusic (very successfully) as a musical element. Good examples are Chromelodeon (now defunct), Machinae Supremacy, Anamanaguchi, I Fight Dragons… I enjoy listening to them, but while I do write music with modern sounds, I don’t release them under Inverse Phase, which is a chiptune-only moniker. Also, fun fact: I am completely incapable of playing a live instrument, so jamming along on a guitar isn’t an option for me.

Your sound is also really idiomatic – like these could actually be game sound tracks in terms of aesthetics as well as technology. Is that because of the tools you use? Or is it a choice to emulate game sound tracks?

Game music and I go way back, so there’s a certain bond that consistently (and heavily) influences my style, but the main reason my tunes sound like game music is easy; that’s what the system sounds like. In the case of the Nintendo, the hardware *cannot* produce sounds other than two square waves, a triangle wave, fuzz, and play extremely limited samples which often amounts to a few drums and an orchestral hit.

In lay-terms, think of it this way. Would you ask a pianist why their music doesn’t sound like a trombone or a flautist why they don’t play guitar? =]

Do you perform chiptunes live? Is your gear giggable?

I’ve played some shows. I have a giggable laptop and video effects box that I can take on the road with me. I don’t see why I couldn’t take my MSX with me once I start writing music for that. It’s just a matter of doing something interesting on stage while the music plays since most of it is already written once you get there. Sometimes you dance (or in some artists’ case, writhe like you’re having a seizure) to your music and trigger patterns, other times you’ll mess with volumes or solo parts of your song. Others even prepare live versions where they can loop parts of their song to screw with it more. It’s up to the artist.

What draws you to these old types of sounds? Why chiptunes? Did you game a lot as a kid? (How old are you, if you don’t mind me asking?) Are these sounds from your youth?

I’m old. *gasp* 31!

Yeah, video games played a large role in my childhood. I do think chiptunes are fascinating to me in part because of my video game “heritage” but three other things contribute to why I’m into chiptunes:

1. I was around when your PC was only able to beep, and I remember it being a big deal to get more than that out of my computer. Watching the evolution of sound in electronics really gave me a lot of respect for those that can create something aurally pleasing with something so limited.

2. I wanted to be a part of that group of people. I wanted the challenge. And when I took it on, I found it enjoyable. I liked being limited by the hardware and coming up with creative ways to make it do what you want.

3. Square waves and whitenoise are some of the purest forms of sound a computer or game system can make. There’s a certain beauty in simplicity.

Who are your influences?

I cite Virt (Jake Kaufman), RushJet (Tadd Nuznov), and Dino Lionetti as my influences. There are others, and I hate leaving people out, but really, their music is the core of what encouraged me and got me going.

I’m under the impression that everything you posted to bandcamp is a cover? Do you also write music?

I do write originals, but most of them are old. I did write music for a few old DOS games back in high school. I intend to eventually release one or two original chiptune albums, or maybe even a nice redone version of my old soundtracks. With my personal site down right now, all I have to offer is this:

http://music.cancerdrive.org/track/unit-f-vrc6

Can you describe the process you used to make F___ed 6502? Did you type it out as code or sequence it? Is it multi-tracked? What gear am I hearing?

So, what I do (barring getting the song stuck in my head and checking to see that no one has covered it already) first is listen to the song and figure out which elements of the song will line up with the “instruments” available to me on, say, the NES. For example, the triangle goes an octave lower than the square waves do, so hopefully I can get that to line up with the bass.

Now look at the following screenshot:

[Chiptune Tracker Screenshot]

Basically what we have here is a step-sequencer. The most important parts to pay attention to are the bottom pane which is a “pattern” of 64 notes (four bars of four beats at a slow tempo), and the upper left corner which is the pattern “order”, where we arrange patterns we have created or tell the song to repeat certain patterns (for example, the chorus often sounds the same).

The keys on the keyboard (look down, it’s the same one you’re using) map to musical notes in a manner similar to a piano keyboard and you can use the arrow keys to move to where you want to edit. I leave “editing mode” off, tap out a little bit of the song and make sure I can play it, then turn editing mode back on, and enter several notes and space them out according to when to play them. There is no designation of the amount of time a note is played; when the note is done, you set its volume to zero. Since the chip is so limited, though, you’ll most likely just play another note (which will interrupt whatever is playing in that channel).

Lots of people think this looks like programming because there are a lot of numbers and letters everywhere (and some people don’t like to read [grin]). Aside from being in hexadecimal, it all makes perfect sense if you know what it means. It’s just “note+octave, instrument number, volume, effect”. If something is left out, the replayer doesn’t do anything, and just continues to the next row, kind of like a music box.

Looking at the bar highlighted in blue in the middle, we have:

  1. Square 1: Play note D, fourth octave with instrument 1 at top volume.
  2. Square 2: Play note D, fifth octave with instrument 3, top volume, effect.
  3. Triangle: do nothing
  4. Noise: do nothing
  5. DPCM: Play a “C sharp in octave 3” with instrument 8, maybe a bass drum.

Keep in mind “instrument” is a really loose term here, it’s mostly just a definition of, for example, which square wave(s) to play and which volumes to use, and if there should be an attack or decay, or switching back and forth between two squares. bla bla bla. There is no “real” instrument. =]

The effects don’t require black magic to know, they’re documented:
http://famitracker.shoodot.net/wiki/index.php5?title=Effect_list

So, I start with a part of the song I can easily recognise and get to work laying out the notes, one element of the music at a time. Once I have something solid I take chord progressions, drumbeats, etc (things that recur throughout the song), copy and paste them to other patterns and change them if they differ slightly, and rewrite the new portions of the song into the empty space.

Slowly I get to the point where I can re-order those patterns and get something that sounds close to the original song. Sometimes I’ll beatmatch the tempo of my tune with the original and play them on top of each other so I can hear if I missed any notes and that the ones that are there are correct. It produces a neat effect, to boot.

If all goes well and I can stay *extremely* focused, I’m looking at about an hour or so (based on difficulty) per minute of music time. Unfortunately, us nerds generally have ADD and are easily distracted by shiny things…. oh well. =]

Thanks so much for your time. I’m going to encourage my students to start writing chiptunes!

Writing chiptunes sounds daunting but really it’s pretty easy once you get the hang of it, and fun, too! It only really takes an hour or so to get a good grasp on using a tracker.

Oh, by the way, if you want to take the extra nerdy step, check out these packages for running on-hardware (or in an emulator):

LSDJ (the most popular gameboy package): http://www.littlesounddj.com/

NTRQ (a nifty tracker by an old NES musician): http://blog.ntrq.net/

Live Coding

Pre-History

Live coding traces its roots back to early networked computer pieces. The first ever networked piece was at Mills College in Oakland, California in 1977. Rich Gold and Jim Horton did a duo using KIM computers. This is the same kind of computer as used by David Behrman.

KIM computers had hexadecimal keypads for all data entry. They had to be programmed directly in machine code. Despite the high barrier to entry, a lot of people around the Mills scene were getting into computer music. They formed The League of Automatic Music Composers, which also included a few analog synthesisers. They used algorithmic music structures with live human interaction.

They would get a hall for a whole Saturday and spend the day hooking their computers to each other by soldering them to interrupt buses or other locations. This was a dangerous process, as picking the wrong solder points could fry their computers. Each computer would be programmed to respond to incoming data and to generate data to send on to the others. They described this as “non-hierarchical, interactive, simultaneous processes.”

The computers controlled things like pulse width generators. After all of them were communicating with each other, the public could come in and listen to the result of this long setup. The League called these “public occasions of shared listening” and would sometimes go sit with the audience while their processes ran.

We listened to a video, but I didn’t project it because of the tiny size. You may want to watch it now. There are other videos embedded in the paper from which I got that one.

The Hub

In 1986, John Bischoff and Tim Perkis founded the Hub. They literally had a hub – this was a digital switch running on a KIM computer. Instead of dangerously wiring their computers to each other, they could just connect to a shared service. This works a lot like network hubs do now. The group got bigger and they built better hubs. In 1987, they had two identical hubs with modems, so they could communicate remotely. They used this setup to play in two geographically separated trios at a concert at the Clock Tower in New York City. This was the first ever concert with distant groups doing networked communication.

We listened to some documentation of the concert, which is also available as a video.

The media was very taken with the idea of musicians literally ‘phoning it in’ and they got a lot of press related to networked performance, although this was not really their focus.

The Hub carried on for another ten years, going on to abuse MIDI hardware, until 1997, when they did a very high-profile concert which focussed on the networked performance aspect. They had musicians located in three locations, each separated by hundreds of miles. One of those locations was Mills College, where I was in the audience. There were a lot of computer crashes (and in those days, the startup chime went out to whatever audio device you had plugged into the computer, so the crash sounds boomed out over the halls’ speakers). The concert was a bit of a disaster and the group subsequently broke up.

Some of their tools live on, however. HMSL has been ported to Java as JMSL. It is not free, alas, but apparently now works with Max. (If you like Java-based music, you can also check out JSyn, which it seems you don’t have to pay for if you’re just writing your own music with it, but if you want to write apps, you have to pay.)

Source for this section is mostly “Indigenous to the Net: Early Network Music Bands in the San Francisco Bay Area” by Chris Brown and John Bischoff, 2002.

Live Coding as We Now Know It

The first live coding concert as we now know it (sitting on stage and writing a program to generate music) was given by Ron Kuivila at STEIM in Amsterdam in 1985. We listened to the track Water Surface, which was reconstructed in 2007. It’s on “A prehistory of live coding,” which is not on Spotify or in the library.

The so-called “projection era,” where bands project the contents of their monitors onto screens viewable by the audience, started in 2000 in London with the band Slub. They use multiple languages, including Perl and Scheme Bricks, which you can download from http://www.pawfal.org/dave/index.cgi?Projects/Scheme%20Bricks. It requires Fluxus.

We listened to a track by Slub called 20060401folded, off the “A prehistory of live coding” CD. They have a lot of tracks on their website.

Why Live Code?

There used to be t-shirts which said, “I’m not just checking my email,” because live laptop performance was somewhat opaque to audiences. There are different ways to make laptop performance more interesting. One is to use a very large number of speakers, like Birmingham does. Another is to use gestural controllers, so people can see things happening. And another strategy is live coding.

In 2000, code was cool. This was shortly after the “Open Source” movement split off from the Free Software movement, so there was a lot of political chatter about source code going on among developers. You could buy posters with the source for the Linux kernel in the shape of a penguin. In 1998, Netscape decided to make their browser (whose descendant is now known as Firefox) open source. They had a huge party at the Sound Factory in San Francisco, California, and at the party there were projectors showing the source code scrolling by, with green text on a black background. (Pictures and an account of the party are here.)

Live coders were also almost certainly influenced by the ideas in Kim Cascone’s article “The Aesthetics of Failure: ‘Post-Digital’ Tendencies in Contemporary Computer Music,” which you can find in Audio Culture on page 392 or online and you should read. Coding live, in front of people while they look at your screen is like walking a tightrope without a net.

Showing your code is also more open and lets other people know how you’re making music, so it has an aspect of sharing. A group called TopLap, which is interested in promoting live coding, formed in the first decade of the 21st century. They have a high degree of overlap with OpenLab, which is a group interested in sharing knowledge. OpenLab is influenced by the GNU General Public License and the ideals of sharing in FLOSS. They’re also influenced by social centres, the squatter movement, and Cardew’s Scratch Orchestra (according to Simon Yuill’s article in the FLOSS+Art book), so anarchist ideals of sharing and openness are also a big influence.

This emphasis on sharing is present in the TopLap manifesto, which demands:

  • Give us access to the performer’s mind, to the whole human instrument.
  • Obscurantism is dangerous. Show us your screens.
  • Programs are instruments that can change themselves
  • The program is to be transcended – Artificial language is the way.
  • Code should be seen as well as heard, underlying algorithms viewed as well as their visual outcome.
  • Live coding is not about tools. Algorithms are thoughts. Chainsaws are tools. That’s why algorithms are sometimes harder to notice than chainsaws.

The sharing ethos extends to live coders helping each other in social ways. They do something called a HackPact, where people write something every day for a month, in order to get better. Live Coding is a skill that takes practice and by doing something for a month in this way, practitioners can improve with the help of social pressure not to slack off.

redFrik tried to do a live coding performance with too little practice once and it was a disaster for him (source), so he was one of those who took up daily practice. We listened to a track by him called Aug19, off the same disk as the last two.

redFrik uses SuperCollider, which is a language that’s fairly popular with live coders. Another SuperCollider live coder is Marije Baalman. She’s one of the core developers of SuperCollider. We watched a video of her live coding on a Greyhound bus, which you can download from here.

On the video, you can see her using Node Proxies and JITLib, which were developed for live coding.

mcld is another SuperCollider-based live coder, who does live processing of beatboxing. We listened to Codebox 30min jam 1 by mcld.

Another SuperCollider-based system for live coding is ixi lang by Thor Magnusson. You can download it from http://www.ixi-audio.net.

I did a short demonstration of it in class, but you can watch video tutorials of it, which are very compelling. In order for this to work, you must make sure that MIDI is enabled on your computer.

Some live coders use ChucK, which is what PLOrk, the Princeton Laptop Orchestra, uses. Each computer has its own speaker, so the laptops become more like instruments. We listened to a bit of Favorite Things or Titre francais avec un petit Mondrian, which can be downloaded from the PLOrk website.

Also, check out this video: http://player.vimeo.com/video/9790850

Algorithms are Thoughts, Chainsaws are Tools from Stephen Ramsay on Vimeo.

Reactive Music

This is music that is not coded live, but rather reacts to a user in real-time. RjDj is an iPhone app that allows users to play reactive music “scenes.” These are pieces that use microphone input or sensor data to modify the music as it plays. It reacts to the environment in which the user listens to it. The user can make recordings of their interactions with the scene and then share them via the website. We listened to a bit of one such recording. Many, many, many of them can be found on the company’s website.

You can make highly canned reactive music with their tool RJC1000. Or, you can use their composers pack, which runs on Pd (you need the vanilla version of Pd). Their pack provides extra functionality, like the ability to access sensor data, and things useful for making pop music. They have a wiki for composers and sometimes do training sessions at their London headquarters. There are also some example scenes you can download, and they encourage you to look at the contents of scenes created by their in-house group, Kids on DSP.

If you want to do either of your projects on RJDJ, that’s fine, just please let me know.

Getting Started with BBCut

(This x-posted to How to Program in SuperCollider)

Installing

BBCut2 is a nifty library for doing break beat cutting. To use it, you must first install it. It is not available as a quark, alas. To get the library, download it from http://www.cogs.susx.ac.uk/users/nc81/bbcut2.html. Then unzip it. Inside, you will find several directories.

  1. Move the “bbcut2 classes” directory to ~/Library/Application\ Support/SuperCollider/Extensions . That tilde represents your home directory, so if your user name is nicole, you would put the file in /Users/nicole/Library/Application\ Support/SuperCollider/Extensions . If the Extensions directory does not exist on your system, then create it.
  2. Put the “bbcut2 help” directory inside the classes directory that you just moved.
  3. Put the “bbcut2 ugens” directory in ~/Library/Application\ Support/SuperCollider/Extensions/sc3-plugins . If this directory does not exist, then create it.
  4. Take the contents of the “bbcut2 sounds” directory and put them in the sounds folder with your SuperCollider application. So if you have SuperCollider in /Applications, you would put the contents of “bbcut2 sounds” in /Applications/SuperCollider/sounds

Then, re-start SuperCollider. Depending on what version of SC you have, you may have duplicate classes. If you do, there will be errors in the post window. If you see that this is a problem for you, go find the files in the BBCut classes and delete them, making sure to keep the other copy. The error message will tell you where to find the files and which ones they are.

The Clock

BBCut relies on a clock. When I’m coding, I usually base the clock off the default clock:

 
 TempoClock.default.tempo_(180/60);
 clock = ExternalClock(TempoClock.default); 
 clock.play;  

The tempo is defined as beats per second. That’s beats per minute, divided by 60 seconds. In the above example, the clock rate is 180 bpm, which is then divided by 60 to set the tempo. If you wanted a clock that was 60 bpm, you would set tempo_(60/60), or for 103 bpm, it would be tempo_(103/60)

BBCut uses an ExternalClock, which uses a TempoClock, so in the above example, I give it the default TempoClock. I don’t have to use the default one, but could declare a new one if I wanted: clock = ExternalClock(TempoClock(182/60));

The next step is to tell the clock to play. If you forget this step (which I often do), nothing happens later on, because BBCut relies on this clock.

Working with Buffers

There is a special sort of buffer used by BBCut, called a BBCutBuffer. The constructor for this takes two arguments. The first is a string which should contain the path and file name of the file. The second argument is the number of beats in the file. For example, we could open one of the sound files that came with BBCut:

 
 sf= BBCutBuffer("sounds/break",8);

We need to wait for the Buffer to load before we can start using it. One way to do that is to put the code that relies on the Buffer into a Routine. And then, we can tell the Routine to wait until the server is ready to carry on.

 
 sf= BBCutBuffer("sounds/break",8);
 
 Routine.run({
  s.sync; // this tells the task to wait
  // below here, we know all our Buffers are loaded
   . . .
 })

Now we can tell BBCut that we want to cut up a buffer and get it to start doing that.

 
  cut = BBCut2(CutBuf3(sf)).play(clock);

BBCut2 is the class that runs everything, so we make a new one of these. Inside, we pass a CutBuf, which is a class that handles Buffer cutting. We tell the BBCut2 object to play, using the clock. This starts something going.

Cutting is much more interesting if it can jump around in the buffer a bit:

 
  cut = BBCut2(CutBuf3(sf, 0.4)).play(clock);

We can specify the chances of a random cut. 0.0 means a 0% chance and 1.0 is a 100% chance. We can set the chances at any numbers between and including 0.0 to 1.0. If we want a 40% chance of a random jump, we would use 0.4.

Cut Procedures

We can tell BBCut to use one of several cut procedures. The original one is called BBCutProc11.

 
  cut = BBCut2(CutBuf3(sf, 0.4), BBCutProc11.new).play(clock);

It can take several arguments, which are: sdiv, barlength, phrasebars, numrepeats, stutterchance, stutterspeed, stutterarea

  • sdiv – is subdivision. 8 subdivisions gives quaver (eighth note) resolution.
  • barlength – is normally set to 4 for 4/4 bars. If you give it 3, you get 3/4
  • phrasebars – the length of the current phrase is barlength * phrasebars
  • numrepeats – Total number of repeats for normal cuts. So 2 corresponds to a
    particular size cut at one offset plus one exact repetition.
  • stutterchance – the tail of a phrase has this chance of becoming a repeating
    one unit cell stutter (0.0 to 1.0)

For more on this, see the helpfile. In general, the cut procedures are very well documented. Here’s an example of passing some arguments to BBCutProc11:

 
  cut = BBCut2(CutBuf3(sf, 0.4), BBCutProc11(8, 4, 2, 2, 0.2)).play(clock)

We can tell the cutter to stop playing, or free it

 
  cut.stop;
  cut.free;

Putting all of what we have so far together, we get:

 
(
 var clock, sf, cut;
 
 TempoClock.default.tempo_(180/60);
 clock = ExternalClock(TempoClock.default); 
 clock.play;
 
 sf= BBCutBuffer("sounds/break",8);
 
 Routine.run({
  s.sync; // this tells the task to wait
 
  cut = BBCut2(CutBuf3(sf, 0.4), BBCutProc11(8, 4, 2, 2, 0.2)).play(clock);
 
  30.wait; // let things run for 30 seconds
  
  cut.stop;
  cut.free;
 })
)

There are several other cut procedures, like WarpCutProc1 or SQPusher1 or SQPusher2. If you go look at the main helpfile, you can find which ones are available. This file is called BBCut2Wiki (and is found at ~/Library/Application\ Support/SuperCollider/Extensions/bbcut2\ classes/bbcut2\ help/BBCut2Wiki.help.rtf or by selecting the text BBCut2Wiki and typing apple-d )

Polyphony

The clock keeps things in sync, so you can run two different cut procedures at the same time and have things line up in time.

 
  cut1 = BBCut2(CutBuf3(sf, 0.4), BBCutProc11(8, 4, 2, 2, 0.2)).play(clock);
  cut2 = BBCut2(CutBuf3(sf, 0.2), WarpCutProc1.new).play(clock);

You can even mix and match sound files:

 
(
 var clock, sf1, sf2, cut1, cut2, group;
 
 TempoClock.default.tempo_(180/60);
 clock = ExternalClock(TempoClock.default); 
 clock.play;
 
 sf1= BBCutBuffer("sounds/break",8);
 sf2= BBCutBuffer("sounds/break2",4);
 
 Routine.run({
  s.sync; // this tells the task to wait
 
  cut1 = BBCut2(CutBuf3(sf1, 0.4), BBCutProc11(8, 4, 2, 2, 0.2)).play(clock);
  cut2 = BBCut2(CutBuf3(sf2, 0.2), WarpCutProc1.new).play(clock);
 
  15.wait;
  cut1.stop;
  cut2.stop;
 })
)

If you also want to sync up a Pbind, you can use BBCut’s clock via the playExt method:

 
Pbind.new(/*  . . . */ ).playExt(clock);

Or, if you want to play an Event, you can use the TempoClock associated with the ExternalClock:

 
Event.new.play(clock.tempoclock);
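
For example, here’s a quick sketch (assuming the clock variable from the examples above and SuperCollider’s built-in \default instrument) of a pattern that stays locked to the same clock as the cuts:

(
Pbind(
	\instrument,	\default,
	\note,		Pseq([0, 3, 7, 3], inf),
	\dur,		0.5,
	\amp,		0.2
).playExt(clock);
)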

Groups and FX

If we want to add fx to the chain, and take them back out, we can use a thing called a CutGroup:

 
  // make a group with a Buffer
  group = CutGroup(CutBuf3(sf1, 0.4));
  // then send it to get cut up
  cut1 = BBCut2(group, BBCutProc11(8, 4, 2, 2, 0.2)).play(clock);
  // then put some FX in the chain
  group.add(CutMod1.new);

The CutGroup acts like an array, which holds our CutBuf and also the fx. To get an idea of how this works, try running the following code, adapted from CutGroup’s help file:

 
(
var sf, clock;
 
clock= ExternalClock(TempoClock(2.5)); 
  
clock.play;  
  
Routine.run({
   
sf= BBCutBuffer("sounds/break",8);
 
s.sync; //this forces a wait for the Buffer to load
 
g=CutGroup(CutBuf1(sf));
 
BBCut2(g, WarpCutProc1.new).play(clock);
});
 
)
 
//run these one at a time
g.cutsynths.postln; //default CutMixer was added
 
g.add(CutComb1.new);
 
g.cutsynths.postln;
 
g.add(CutBRF1.new);
 
g.cutsynths.postln;
 
g.removeAt(2);  //remove comb
 
g.cutsynths.postln;

Note that the fx you add start at index 2.

And also notice that you may get some errors in the post window when you remove fx: FAILURE /n_set Node not found. These are not usually a problem.

Tying it all Together

Here’s an example using everything covered so far:

 
(
 var clock, sf1, sf2, cut1, cut2, group;
 
 TempoClock.default.tempo_(180/60);
 clock = ExternalClock(TempoClock.default); 
 clock.play;
 
 sf1= BBCutBuffer("sounds/break",8);
 sf2= BBCutBuffer("sounds/break2",4);
 
 Routine.run({
  s.sync; // this tells the task to wait
 
  group = CutGroup(CutBuf3(sf1, 0.2));  // make a group with a Buffer
  cut1 = BBCut2(group, BBCutProc11(8, 4, 2, 2, 0.2)).play(clock);  // then cut it up
  
  5.wait;
  
  cut2 = BBCut2(CutBuf3(sf2, 0.2),
   BBCutProc11(8, 4, 4, 2, 0.2)).play(clock); // start more drums from the other sound file
  
  5.wait;
  
  group.add(CutComb1.new); // put some FX on the drums in cut1
  
  15.wait;
  
  group.removeAt(2); // take the fx back off
  
  1.wait;
  cut2.pause;
  
  4.wait;
  cut1.stop;
 })
)

Summary

  • You must download BBCut2 from a website and install it by moving folders around.
  • The main BBCut helpfile is BBCut2Wiki
  • BBCut uses its own clock class, called ExternalClock, which relies on a TempoClock.
  • You must remember to start the clock
  • BBCut uses its own buffer class called BBCutBuffer. The constructor takes two arguments: a string with the path and filename, and the number of beats in the sound file. If you get the number of beats wrong, your results may sound weird.
  • There are several cut procedures, one of which is called BBCutProc11. To use it (or any other cut procedure in its place), you use the construction BBCut2(CutBuf1(sf), BBCutProc11.new).play(clock)
  • The cut procedures all have good help files.
  • Due to the magic of clocks, you can start two BBCuts going at the same time and if they have the same clock, they will line up.
  • It is possible to add FX or remove FX from your chain by using CutGroup.

Practice

  1. Get some of your own drum loops or rhythmic samples and try this out. You can find many loops at http://www.freesound.org/. Also, try some files with sustained tones.
  2. Experiment with the arguments to BBCutProc11. Try your files with several different values.
  3. Try out the different cut procedures. Look at their help files to find their arguments.
  4. The FX are not as well documented, but they’re located in ~/Library/Application\ Support/SuperCollider/Extensions/bbcut2\ classes/cutsynths/fx . Try several of them out.

Ambient and Drone

1934 to the 70’s

Muzak was started in 1934 and, until 1987, was a source of extremely annoying background music of the 101 Strings, elevator-music variety. John Cage first proposed a silent piece as something for Muzak to play. In 1969, the General Assembly of the International Music Council of UNESCO passed a resolution condemning Muzak. In 1973, R. Murray Schafer was pamphleting against Muzak.

Then, in 1974, Brian Eno was hit by a taxi cab and got the idea for music that fit into the environment while he was recovering in bed. For more, see his essay in Audio Culture pg 94-97. In 1978, he released Music for Airports. Thus ambient music was invented. He wanted to “induce calm and a space to think” and use the music as a tinting effect. It could be used “to modify our moods in almost subliminal ways.” (quotes from Ocean of Sound by David Toop around page 9-ish)

Ambient music must be “as ignorable as it is interesting” and thus can not be programmatic, as people may drop in and out of listening at any time.

The 80’s

New Age is and was a quasi-spiritual movement that incorporates ideas like using Tarot Cards to make decisions, believing in horoscopes, psychic powers, communicating with aliens or angels. There is no single set of beliefs. Adherents pick the ones that work for them. There is a tendency to idealise an imagined pre-Christian utopic past and a tendency towards problematic exoticising appropriation of eastern beliefs. But, again, adherents can vary widely in their beliefs and practices.

They wanted music that would be unobtrusive and useful for things like meditating, doing yoga, seeing a massage therapist or whatever. This is related to ambient music because it seeks to induce calm and relaxation. In 1981, Tower Records in Mountain View, California added a new age bin. Hearts of Space was an important radio show and record label, which played this music, also called “space music.”

It was optimistic and represented the future. In 1986, Constance Demby released Novus Magnificat: Through the Stargate, which sold 200k copies and is the best selling new age record of all time. (It is not on Spotify or in the library, sorry.) It references western hymns and spiritual music, is endlessly hopeful, has several slowly building climaxes and follows normal western musical form. The timbres she uses are very classic FM synthesis sounds. She rejects the label ambient.

The 90’s

Alex Paterson worked with Eno and DJed in chillout rooms. He did ambient stuff under the name The Orb. We listened to Montagne D’Or (Der Gute Berg), which I think sounds cool.

Aphex Twin, who claims not to have known of Eno, released Selected Ambient Works, which is in the library. We listened to Rhubarb.

Dark Ambient

Coming from ambient, but going in the opposite direction from New Age, we get Dark Ambient. Some of Aphex Twin’s stuff is considered dark ambient. We listened to Tree.

Nurse With Wound is also a dark ambient group. They also do noise and drone. We listened to Spiral Insana 2, which is not on Spotify or in the library.

And Robert Rich is also a dark ambient guy. We listened to The Simorgh Sleeps on Velvet Tongues, which is a very drone-y piece.

Drone

Drone has roots in Indian music and also in western music with folk instruments like the hurdy-gurdy or the bagpipes. La Monte Young did drone stuff in the 60’s with his Dream House installation. It started as a high art, acoustic genre. Important acoustic droners include the Deep Listening Band (who do use electronics sometimes) and Ellen Fullman.

Phill Niblock has been doing electronic drone for a long time. We listened to a bit of Pan Fried Part 2. He builds up clusters of tones only a few Hz apart and uses many channels.

Eliane Radigue is a French drone artist who uses electronics. We listened to some of Elemental 2, which is not on Spotify or in the library. And a bit of Maggi Payne, who is similarly not available.

Adam Menzies, aka _i, is a drone artist who used to be a rave DJ. He was a strong believer in PLUR (Peace, Love, Unity and Respect), but the world got him down and now he makes sad music. We listened to a few moments of thehammondanditelleachotherwhatwe’vebeenupto.

Eleh is a drone artist of unknown identity who has a CD out now. We listened to HeleneleH.

William Basinski is also a big influence on a lot of drone artists. His Disintegration Loops are literally the sounds of tape falling apart; the music is a recording of the process of destruction.

Dave Seidel is another drone composer. I quite like his work and it’s all up on his website for download.

Techniques of Making Drone

You can slow down samples. Shamantis showed that even Justin Bieber sounds good if you play him slowly enough: http://soundcloud.com/shamantis/j-biebz-u-smile-800-slower. He used a FOSS programme called PaulStretch, which you can get from the Mac download site or for Windows or Linux. The Windows/Linux page also has instructions on how to use it. (FOSS means Free and Open Source Software.) There’s also a rough SuperCollider sketch of this kind of extreme stretching at the end of this section.

Or you can do some droning with SuperCollider. The following code example is extremely processor intensive:

b = Buffer.read(s, "sounds/a11wlk01.wav");


(
	x = SynthDef(\drone_buffer, { arg out = 0, bufnum, startFrame, dur, grainDur, rate = 1,
									amp = 0.2;
			
		var env, player, speed;
			
 		env = EnvGen.kr(Env.linen(0.01, (grainDur - 0.011), 0.001, amp), doneAction:2);
 		speed = rate *  BufRateScale.kr(bufnum);
 		player = PlayBuf.ar(1, bufnum, speed, startPos: startFrame);
		Out.ar( out, player * env)
	}).add;

)
(
	Pbind(
		\instrument,	\drone_buffer,
		\rate,		0.5,
		\bufnum,		b.bufnum,
		\grainDur,	0.05,
		\dur,		1/440, 
		\amp,		0.1,
		\startFrame,	Prout({ |evt| // advance a few frames per event, slowly sweeping through the buffer
						var frame;
						frame = 0;
						{frame < b.numFrames}.while({
							frame = frame + 3;
							frame.yield;
						})
					})					
	).play
)
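
Another way to get PaulStretch-style extreme stretching inside SuperCollider is granular time stretching. This is just my own rough sketch of that idea, using the Warp1 UGen and the sample that ships with SuperCollider, not the actual PaulStretch algorithm:

(
b = Buffer.read(s, "sounds/a11wlk01.wav");
)

(
{
	var stretch = 50;	// play the file 50 times slower than normal
	var pointer = Line.kr(0, 1, BufDur.kr(b) * stretch, doneAction: 2);
	// Warp1 grains its way through the buffer as the pointer moves from 0 to 1
	Warp1.ar(1, b, pointer, 1, 0.2, -1, 8, 0.1, 2) * 0.3;
}.play;
)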