
Tidal Cyclist: Saachi Kaup
aka: SaachiKaup
Comments: Club Tidal Forum Thread

This year, I made a project involving Tidal Cycles and mandalas for Summer of Haskell. As you may well know by now, TidalCycles (or Tidal for short) is software for making patterns with code. It is used to create patterns of many kinds, from music and visualisations to dance moves for robots. Under the hood, Tidal uses a paradigm called Functional Reactive Programming (FRP), which suits work involving continuous time, such as composing music and animations.

Mandalas

Mandalas are geometric designs, created with circles and repeated simple shapes. Woven squares, concentric circles, intricate triangles and squiggly lines form the piece. They are common in South Asian art, showing up in temples and sand paintings alike. Today they are widely used, to the extent that you find them on shirts and household items.

All the Tidal visualizations I saw were linear: notes playing forward with time. But mandala art stays in one place. The shapes morph in place, accompanying the rhythms and cycles of the music. The periodic nature of music should be reflected in its visuals as well. Thus came the project idea:

Map the underlying structures of Tidal to mandala patterns.

Background

From all that I found out about mandalas, they are general patterns, not really particular to India or any other country. Native American art has some mandalas; so does Tibetan sand art.

Tibetan monks take days to painstakingly make these sand paintings. After a while, they let the tides wash over them.

Not unlike Euclidean rhythms, the underlying structures make the art universal. However, their mathematical symmetry, though obvious, remains hard to pin down. Mandala art on computers has usually emerged through an exploratory process, going back as far as we have been generating graphics. Speaking of computers generating graphics:

Fractals

Some mandalas are fractal. It's an aspect of their underlying mathematics. Arthur C. Clarke noted an odd coincidence when he wrote about mandalas:

"[..] but indeed the Mandelbrot set does seem to contain an enormous number of mandalas."

Mandalas and fractals: the visual similarity is obvious.

Languages

Languages like Processing or libraries such as p5.js can produce mandala graphics. So why Haskell? No doubt, the heavy rendering workload should be handled by languages more suited to the task; even Haskell animation libraries are built as thin wrappers over OpenGL, the C graphics API.

But Haskell is particularly well suited to representing these abstract patterns. Mapping simple shapes through its type system could lead to varied results. How could it be used to map mandalas onto the existing structures of Tidal Cycles?

Turtle Graphics

Turtle graphics is a simple graphics system. Typical commands include move forward, turn left, turn right and so on. Some of you might be familiar with it from Python's turtle library. For the old-school among you, Microsoft's Visual Logo might ring a bell.

MSWLogo Interface

Alex and I thought a simple turtle notation might be a good place to start. I explored animation libraries like Gloss, Reanimate and WorldTurtle. WorldTurtle seemed most suited to the task.

We integrated a basic turtle notation within Tidal's parser. Understanding the basics of monadic parsers proved useful. The system needed to be portable to other libraries, so we created an intermediate notation. Thus began the patterns.

Basic Patterns

"f": a pattern that moves forward with time.

"f r": moves forward in the first half of the cycle, then turns right by 90 degrees in the second.

Mini-notation Magic

Tidal's Mini-notation is used for writing patterns of various sorts (notes, samples, parameters).

Internally, the mini-notation is parsed as a shortcut for a function. You could otherwise write it using longer function compositions.
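For instance, in standard Tidal (a stock example, not from this project), these two lines are equivalent:

d1 $ s "bd sn"
d1 $ s (fastcat ["bd", "sn"])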

"f <l r>": alternately turns left and right in each cycle's second half.

"f [f f l]": starts forming mandala-like patterns.

But there was a problem.

The system is not real time. WorldTurtle's API does not give low-level access to the time at which the pattern is produced. This leads to graphics that are only theoretically in sync with the music. Gloss, on top of which WorldTurtle is built, does provide access to time.

There was also the problem of changing patterns in real time. But, by storing patterns in mutable, shared variables, we could handle this with threads. This is a work in progress.
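A minimal sketch of that shared-variable idea in plain Haskell, with a String standing in for a Tidal pattern (illustrative only, not the project's actual code):

import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (forever)
import Data.IORef

main :: IO ()
main = do
  patRef <- newIORef "f r"               -- shared, mutable current pattern
  _ <- forkIO $ forever $ do             -- render thread: keeps redrawing
         pat <- readIORef patRef
         putStrLn ("drawing: " ++ pat)   -- stand-in for the turtle rendering
         threadDelay 500000              -- roughly two frames per second
  writeIORef patRef "f l l"              -- a "live" edit from another thread
  threadDelay 2000000                    -- keep the demo alive briefly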

Meanwhile, some more patterns:

Ninja Star

"f l l [f r r f l l f r r] f l l": Mini-notation magic at hand, again.

Honeycomb

"f <l r> f <r l r>"

Demonic

append (slowSqueeze "1 3 1" "[l f, f r]")

ChaosMap

slow "1 1 2 3 5 8" "f l l": Everything is patternable.

At its base, "f l l" on its own produces a simple triangle.

This pattern slows each part of the cycle by the corresponding value in the first pattern: the first and second parts of the cycle by 1, the third part by 2, and so on.

A friend who likes physics said it looks like Brownian motion.

You can find more patterns here.

Animation and Time

A talk by Conal Elliott on Functional Reactive Animation specifies what graphics systems do: they abstract the pixels away and keep space continuous, so you can work at a higher level. This allows for better composition too; you can scale and morph images without much difficulty. The task is to use the same methods for time.

Regardless, some things I have learnt in this time: the rich variety of mandala art across many continents; L-systems, which produce tree-like structures using grammars; and music theory, with its tones, scales, chord progressions and their mathematical underpinnings.

What Next?

There is the intuition we have that fields of knowledge are interlinked. That these patterns are present in many areas. But you can't work on intuition alone. So how do you confirm it?

Well, you can see it. As you watch the pattern form chaotic shapes on a screen, the connection is confirmed. These patterns still have a long way to go. An FFI could allow JavaScript libraries to produce the animations instead. The new version of Tidal could lead to a new world of possibilities.

However, the current system does show the structure of Tidal. The ChaosMap pattern, after going haywire in all directions, comes back to its original point. Seemingly random, until the very end when the pattern is visible. It showcases the underlying mathematical beauty at work. This was the central goal to accomplish.

Tidal Cyclist: GEIKHA
Location: Buenos Aires City
Years with Tidal: 4 years
Other LiveCoding env: Hydra, SuperCollider, Estuary
Music available online: Live sets on YouTube, Snippets on Instagram
Code online: GitHub
Other music/audio sw: FL Studio, iZotope RX, Reaper

About me

I was born in Buenos Aires, Argentina. Coming from an artistic family, I grew up learning about music production and image manipulation. And as an internet child (I was born in this millennium), I grew up chronically attached to the computer. I've developed most of my musical knowledge as a Hip-Hop producer and as a musical omnivore. However, nowadays, I've grown away from Hip-Hop to dive into UK Garage and Chicago Footwork specifically.

I went to a secondary school specialized in computer science, where I learned the basics of programming and software development. What I was taught was super-useful! But it was also very business-oriented, narrowly focused on making me a compliant worker. At some point, maybe when I was 15, I discovered SuperCollider and tried making sounds with it. It was hard for me at the time, and I wouldn't get much done, to be honest. Two years later I discovered TidalCycles and FoxDot and was immediately interested in working with them. For that I have to thank Iris Saladino and some other now-friends who I went to see talk and perform at the University of Exact and Natural Sciences here in Buenos Aires.

My young age allowed me to put lots of time into livecoding and the community. Since then, in just 4 years, it's been a pleasure to join the organizational side of things with the TidalCycles and Hydra communities.

Music

I call what I do a hybrid of Footwork & RKT (Argentinian Reggaeton). I've livecoded many styles throughout the years but I feel I've finally found a style unique to myself which I want to develop more and more. I'm inspired by:

What projects are you currently working on or planning? What's next?

Performances, performances and performances! That's my goal right now. Since I use samples of both local and international Reggaetón, I feel my music has a lot of potential on the local dance floors. I post snippets as Reels on my Instagram. However, I'm considering doing a mixtape with some of the "songs" I've been coding these last 2 years! Ain't no footwork if I don't share those trax.

Links to recorded livecoding sessions:

What samples or instruments do you like to work with?

I practically only use samples. No synths here! I love to use samples from pre-existing songs. These samples might be looped vocals or instruments, vocal phrases or slices of the whole song. I enjoy coding new effects in SuperCollider to play with samples in unique ways.

I have a very personal set of samples: currently, I don't use a single sound from Dirt-Samples, although I'm planning to add some I remember fondly to my setup! I also use a specific sample-naming system that fits my needs.

Livecoding

What do you like about livecoding in Tidal? What inspires you?

Technical note:

I always say that I see Tidal as the most powerful sampler-sequencer in the world. The key to that is definitely its modularity. The purely functional aspect of Haskell and how Tidal has been built over it makes it so easy to create modular structures that link any Tidal functionality to any other one. I know nowadays we have ports such as Strudel, but the magic and simplicity of Haskell-like code is unbeatable for this purpose IMO.

The improvisation spectrum:

However, moving away from the technical aspects and going into the experience of livecoding, Tidal is also the fastest tool for me to go from complex musical ideas into sound. This may be confusing to some people, as it's infuriatingly slow to write a pre-thought melody in Tidal. Naturally, a guitar (for example) is infinitely fast in its thought-to-sound production. That is, in spite of only being able to play as many as 6 notes at once, and using practically the same sounds. I call this trade-off the improvisation spectrum.

On one side (the most common one), we have fast-reaction, infinitely detailed, monophonic instruments. Livecoding is the complete opposite: it's low-reaction, discretely defined, and as polyphonic as you want it to be. But it's not only polyphonic as in "you can play more than one note at a time"; it's also "you can play as many of any sound as you want, however you want".

For someone such as myself, a music producer, livecoding is the perfect instrument. I was never highly invested in any single instrument, I always cared and thought about music as a whole, as the intertwining of elements. And these are the ideas that I'm able to express with Tidal fast and on the spot. I'm live-producing.

The superhuman:

For some years I've noticed a pattern in the music I like (and in popular music too): the superhuman. That is, musical elements and expressions which cannot be reproduced by any single human. For example, autotuned perfectly pitched vocals, the accelerated R&B vocal runs in UK Garage, the pitched-up vocals of a hyperpop song, the slowed-down voice of a vaporwave song, the unpredictable rhythms of glitch music. Well, there's definitely a superhuman aspect to Tidal-made music: the algorithmically complex rhythms that no human would be able to follow, the indeterministic randomness, the multiplexity of canons. That definitely inspires me!

How do you approach your livecoding sessions?

My approach lately has been quite structured. I've been doing "production" sessions where I simply explore ideas, add new samples, and basically "write songs" in a way. As for performance, I like to select a list of "songs" (pre-made code snippets) which I'll use as starting points throughout the performance. I start with something and do some changes to it and try to find an improvisational flow, if I can't find it, or if the flow gets cut, I simply transition to the following song. The transitions might be seamless or abrupt, depending on what I'm going for. I don't like to use Tidal's transition functions, so I also play a lot with evaluating code at the exact time: risky, but fun.

What functions and coding approaches do you like to use?

I'm a "do-block-er", I prefer to have all my Tidal code on the same block that I constantly re-evaluate, instead of writing each pattern separate from each other. Here's an example that resembles most of my code snippets:

do
  hush
  setbpm 150
  let trans = note 2
  let note' n = note (scale "minor" n - 3) |+ trans
  let kb = slow 1 $ rotR (0/8) $ "1*2 1(3,8)"
  d1 $ stack [ silence
             , kb # "bd"
             , "1*16" # "808hh"
             ]
  d2 $ kb # note' "<0 -2>" # "bass"
  d4 $ chop 8 "somemelodicsample" |+ trans

Using hush at the beginning of the do-block means I can simply comment out a pattern to silence it. However, this also means that if I have a runtime error in the middle of my do-block, everything after it will be silenced. Again: risky, but fun.
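For instance (an illustrative snippet): re-evaluating this block restarts d1, while the hush at the top keeps the commented-out d2 silent.

do
  hush
  d1 $ s "bd*4"
  -- d2 $ s "hh*8"   -- commented out, so hush leaves it silent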

setbpm is a custom function that lets me set the BPM, as long as a 4/4 signature is being used:

setbpm bpm = setcps (bpm/60/4)
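For example, setbpm 150 works out to setcps (150/60/4) = setcps 0.625, i.e. 0.625 cycles per second.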

I want my code to be as short as possible, so I make use of some default Tidal behaviour, such as String patterns automatically being assigned to sound. I don't use struct unless needed; I simply write a pattern of 1 and Tidal takes it as the rhythm. I also use lots of abbreviated aliases for Tidal functions!
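As a stock illustration of those defaults (standard Tidal behaviour as I read GEIKHA's description, not his code):

-- a bare string pattern is read as a sound pattern:
d1 "bd*4"                 -- same as d1 $ sound "bd*4"

-- a pattern of 1s supplies the rhythm, so struct is optional:
d1 $ "1(3,8)" # "bd"      -- ~ d1 $ struct "t(3,8)" $ sound "bd"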

You can find more about the custom functions I use on the Tidal Club, where I always try to share my ideas:

Tidal Contributions

Purpose

The Tidal Cycles blog is intended to be by, for, and about the Tidal community. Anyone engaged with Tidal Cycles is encouraged to submit a blog post. Topics can be about Tidal practices, music made with Tidal, live coding, event coverage, new developments & releases, community, etc. Topics can also be broader: anything that would be of interest to this community, and it doesn't have to be limited to Tidal!

Templates

To make submitting posts easier, there is a set of templates. Each template includes a suggested set of content sections, but consider this just a starting point. The most important thing is to provide content that reflects your unique perspective.

Templates are maintained in GitHub in the tidalcycles/tidal-doc repo / templates branch.

We encourage posts to include:

  • code sections with Tidal examples
  • links into the Tidal user documentation
  • links to recordings, YouTube, Bandcamp, SoundCloud, etc.

Submission Instructions

Detailed posting instructions are included in the template files. Options:

  • Submit via GitHub pull request
  • Work with a blog editor and send your content via Discord DM or email.

Do what works for you!

Markdown

Submitting your content in markdown format is preferred, but it is not required. If you aren't familiar with markdown, no problem. Write your content and we'll take care of the rest.

Docusaurus, MDX and markdown enhancements

The Tidal blog is rendered in Docusaurus which uses MDX as the parsing engine. It supports more layout features including React components. To see the full list of options, check out the Docusaurus Markdown Features page. Here are some examples. There are many more!

Admonitions - triple colon syntax

tip

This is a tip and is called by the triple colon syntax :::tip. You can also customize admonitions.

caution

When using admonitions, be sure to add empty lines before and after your text lines.
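For example, standard Docusaurus syntax for a tip:

:::tip Remember

An empty line goes before and after this body text.

:::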

Details element

Toggle to see more
This is the detail revealed. This is useful for a long code block, allowing users flexibility in how they read through your post.

Another "details" segment, with code:

Toggle for code block - (no div)
h1 $ s "sound"
h2
h3

Tidal Cyclist: Viola He
aka: v10101a, sandpills
Location: New York / Shanghai
Time with Tidal: 1.2 yrs, I'm tidal baby
Other LiveCoding env: SonicPi, FoxDot, Touchdesigner(?)
Music available online: SoundCloud, Vimeo
Other music/audio sw: Ableton, Max/MSP
A photo of Viola, looking down into a computer, with red projection graphics in the background. Photo by Dan Gorelick.

Livecoding

What do you like about livecoding in Tidal? What inspires you?

Making patterns and drum loops are my favorite things! I come from a non-Western percussion background, and using Tidal Cycles feels like wielding an algorithmic percussion instrument / sample chopper hybrid with a lot of space for surprises and ✨randomness✨, tickling my brain in all the right ways.

Tidal is also well-documented and accessible - it removes the barrier of GUIs and DAW paywalls, and has an amazing community involved in maintaining, stewarding, and creating with each other. Livecoding videos are open-source tutorials in themselves. Simply watching videos of others using the same tool differently has taught me a lot - I owe much of my knowledge and practice to this community.

How do you approach your livecoding sessions?

I like to describe my livecoding approach as "structured improv". Creative freedom within constraints is what works best for my live performances. I aim for my music to be engaging enough for people who are not familiar with livecoding practices, yet not completely erase the code-like qualities - thus often bringing pre-written structures, basslines, chords, drums, and a clear arc in my head, improvising melodies and textures in between.

What functions and coding approaches do you like to use?

  • I like superimpose a bit too much. Detuning just a little bit. The danger. The drama!
  • Adding sine waves to modulate panning and filtering, like # cutoff (range 200 2000 $ sine) (honestly I'd modulate anything and everything).
  • Using mask to create simple composition structures.
d1 $ stack [
    s "bd*4",
    mask "[0 1]/4" $ s "~ cp ~ cp",
    mask "[0@3 1]/4" $ s "hh27*8"
  ]

Do you use Tidal with other tools / environments?

Lately I've been outputting everything from Ableton to simplify the mixing process. I use Tidal to send MIDI notes to play custom synths, and route the rest (mostly chopped-up samples) through BlackHole, also into Ableton. I've also started to dabble more into Strudel and am having a lot of fun with it!

Music

Tell us about your livecoding music.

Grounded by film and theatre practices, and inspired by many genres of rock, jazz, pop, and electronic music, I'm always attempting to use livecoding as a narrative opportunity to build worlds through dynamic sonic ventures. I make joyful dnb and techno music that I'd like friends to dance to; and I also make textural blip blops, droney soundscapes, and glitchy vocal mixes that might not be categorized as one type of sound. These two parts of me exist simultaneously, and I try to merge them as I see fit.

There was a period of my youth when I was obsessed with rock operas and concept albums. Listening through an entire album attentively, in order, for a curated experience broke new ground for me, and is somehow, strangely, comparable to certain "algorave" experiences. Building my livecoding sets almost feels familiar, like making... computer opera? The events we organize in New York City usually feature 25-40 minute livecoded sets, and it's the perfect length for these conceptual experiments - more than a few tracks, less than a whole show, embracing the chaos of improvisation but never actually going out of control.

Algorave is a "rave", so it's also natural to compare livecoding with DJing techno, where the scene is underground, diverse, and innovative, and the music is hypnotizing, consistent, and layered - it's a sonic journey that never ends. I think of livecoding music and community similarly, except that the journey does end, after a really good arc for 25-40 mins.

How has your music evolved since you have been livecoding?

I never used to make electronic music at all, and livecoding has made it easier for me to dive deeper into other aspects of music production and performance. What's really cool about livecoding is that we really don't have to be bound by Western tuning systems and music conventions, which continues to be a topic of interest for me. I'm also on a long, deep dive in finding different samples that can be mixed together, as well as sounds that are directly sourced from my life and my culture. I've been making music with machine sounds from the shop I work at, gongs and bells and traditional Chinese instruments chopped up in different ways, morning assembly music from my middle school years, field recordings, and a lot of materials that feel intimate and important to me.

What projects are you currently working on or planning? What's next?

I'm currently working on leading a livecode.NYC project in collaboration with Wave Farm to create a longer-form radio piece. I'm also starting to work with more non-livecode musicians and producers, trying to better record, produce and integrate livecoding with other instruments. Hopefully I'll polish and release some music soon. Oh, and learning, teaching and raving to people about Strudel, because browser-based Tidal has felt SO intuitive and accessible to introduce to my non-livecode friends (and they are important!!!).

About

Viola He is a Shanghai-born, Brooklyn-based interdisciplinary artist, performer, and cultural organizer. Their creative practices engage with DIY electronics, programming, dance/movements, and various time-based media, exploring pathways towards alternative structures, systems and interfaces.

Using algorithmic approaches to enhance, alter, and obfuscate sounds and images, Viola often dreams about infiltrating digital spaces with physical bodies as tools for intervention, wielding their love/hate relationship with technology to challenge the rigid infrastructures around them. Viola is an organizing member of the NYC-based collective Livecode.NYC, and has produced and participated in performance work in NYC, LA, Shanghai, Beijing, Austria, and more.

A photo of Viola, looking down into a computer, with red projection graphics in the background. Photo by Whitt Sellers.

Here is a playlist of recordings made with Tidal. This is a collection of performances that have stuck with me that I continue to draw inspiration from. It is very eclectic, with many different styles - which I think is wonderful. Tidal, after all, can be used for just about any kind of music making, and we should all celebrate the incredible range of expression being done under the heading of live coding. Enjoy!

Artist: recording / performance
Yaxu: gabba improv - Algorave 10th Birthday Stream
This is one of my favorite Yaxu performances. It is fun, builds intensity and shows how to exploit just a few samples - gabba + cpu. He also live codes a small dancing robot. Cool!
Eloi el Bon Noi: la sessió maleïda (The cursed session)
Eloi (the good guy) performed this on the Solstice Stream (Dec 2022) and it burned up the chat with raves. It starts with a radical remix/cutup of the Led Zeppelin classic, Kashmir.
Polymorphic Engine (Martin Gius): Codified Waves
Evocative and mesmerizing acousmatic music based on field recordings of electromagnetic waves manipulated in Tidal.
Linalab: Solstice Night Stream
I really love the vibe here - how it starts in a drone like mode and slowly builds intensity. Nice!
ndr_brt: single sample #4: gtr
Ndr has a whole series of tidal performances done "from scratch" and many use the "single sample" approach. The coding is minimalist but so expressive! bd, MI Clouds drones
Dan Gorelick: In seven - TidalCycles arpeggio jam
Dan strikes me as the "Miles Davis" of live coding - oh so cool, but masterful and innovative.
Bernard Gray (cleary): av v0.1
Experimental electronic music - based on artworks converted to audio as a spectrogram. When the synths come in it will blow you away!
CNDSD: Solstice Stream - 2021; Solstice Stream - 2022
The mixed media work of CNDSD is in a class all by itself. Enigmatic visual storytelling, haunting sounds and expressive live coding all together. Amazing. I love the morphing faces at the end of the 2022 set as the Euclidean beat takes over.
Relyt R: Album Xuixo: Track 1 - Nondegenerate (33 EDO)
Relyt R wrote an incredible blog post about this xenharmonic music conceived in an intense techno style. Just read it and listen!
digital selves: Eurostar
From the 2022 Bandcamp EP - error topography. Very cool groove, sharp glitchy rhythmic sounds and intense bass line. Well done!
Weekly Rave: Playlist: 3/2023 -> now
Cleary & Joanq hold a weekly Rave jam session using Estuary. Sessions are streamed and archived. Lots of great collaborations. Check it out!

Bonus - Sardine!

Bubo (Raphaël Forment): Solstice Stream - Dec 2022
Ok, so it's not Tidal - but Bubo comes from Tidal, and this performance is sheer delight!

Tidal Cyclist: Ghales
Location: Nomad
Years with Tidal: 5 yrs
Other LiveCoding env: p5.js, hydra, foxdot
Music available online: Spotify, SoundCloud, Bandcamp
Code online: GitHub
Other music/audio sw: Reaper, Ableton, BitWig, Open Stage Control
Comments: Club Tidal Forum Thread

Livecoding

What do you like about livecoding in Tidal? What inspires you?

First, the community. From the moment I watched Kindohm play with code I knew I'd have to try it myself at some point. I actually got into live-coding by getting in touch with CLiC and the UnB Media Lab. I've made friends in these communities and played a number of times with them.

Also, mini-notation, even in its current experimental state, makes a lot more sense to me than sheet music for my current work. Mini-notation is fairly easy to write, plus it's easy (and fun) to combine functions for some very sophisticated manipulation. In particular I love the ability to use algorithms to shape music - as a composer it felt (from the very first time) like finding a missing piece of a decade-old puzzle.

How do you approach your livecoding sessions?

With a creeping fear that something will break.

It really depends, though, on whether I'm playing by myself. When in a group (like Nômade Lab) I'll probably use flok (shoutout to munshkr) as it's the most accessible platform, with vanilla tidal. I find that relying too much on custom code for collaborative performance is tricky. Plus it's relaxing to play in that environment, as it lifts the pressure off and becomes an exploratory quest for interesting sound instead.

When playing alone, I aim very high: my goal is for the music to be interesting without the knowledge that it's done by code. If an audience can't vibe to it, I'm doing it wrong. This is difficult because general audiences are used to very engaging, dynamic and tightly composed music, which is very hard to pull off with code (especially through live-coding). From experience, either you bring ready-made compositions, or you live-code using some very high-level custom functions.

What functions and coding approaches do you like to use?

Here are some guiding principles I grew to follow:

  1. Splitting code between a palette and a canvas. The palette is a set of definitions at the top of a song file - time signatures, tonal keys or even custom functions. Each following block is a section of a song, which I usually tag using -- @name <name> so that the comment gets picked up by verso.
  2. Using code where it pays off - fiddling with numbers and fine-tuning variables is a very poor UX. It's time-consuming and unrewarding. Using a separate controller for variables and code to consume them makes a lot more sense to me personally. Recently I've been into MIDI controllers for that purpose.
  3. Avoiding logical constraints - even without plans to use the full capacity of tidal, I like to trust the platform I'm using to be suited to whatever exploration I decide to do. For this reason I've been exploring (and looking for solutions to) anything that seems hard to do in tidal. The hardest seem to be controlled (arrangement-based) transitions, time signature changes, long sequences with looping/non-looping parts, and tonal modulation.

Nowadays, I only really use custom functions for controlling tonality and time signature changes. My custom function repo is basically this.

For tonality, I made a function called k (key) which is used as k <index> <pattern> throughout my pieces. The first argument is used to refer to a key (a combination of a root and a scale mode) by index, which can be set using setkey <index> <root> <mode>. That makes modulating and exploring tonal fields very comfortable. I'll also sometimes use setI to store a pattern and refer to it later with the ^ operator which is very convenient.

Putting this together, a lot of my song code will look like:

do
  -- palette
  setI "theme1" "0 1 2 3"
  setkey' 0 "d" "major"

do
  -- @name section 1
  p "test" $ (note . k) 0 "^theme1"

-- for brevity I use `nok`, which is exactly `note . k`

do
  -- @name section 1
  p "test" $ nok 0 "^theme1"

I also rely heavily on all. One trick in particular really helps, which is to apply transformations (e.g. chop or timeLoop) via all but leave them bypassed. It works quite well to use a function to switch between id and the transformation. This way, if you assign the control parameter to a MIDI control (such as a button), you suddenly have a highly interactive performance element to use anytime. Using this trick with timeLoop is 👌 by far my favorite thing in tidal yet.
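As an illustration only (a hypothetical helper, not Ghales's actual code), the bypass idea can be as small as a function that picks between id and the transformation:

-- `bypass` switches a transformation on or off; wire `on` to a MIDI button
let bypass on f = if on then f else id

-- engaged: every running pattern loops over 4 cycles
all $ bypass True (timeLoop 4)

-- bypassed: `all` applies id and the sound is unchanged
-- all $ bypass False (timeLoop 4)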

In the future I aim to use all + timeLoop extensively. It really changed the game for me, and I feel that it's a very good way of steering a composition in tidal.

Do you use Tidal with other tools / environments?

I've used a number of different synths and softwares over the years. Today, it's a combination of TidalCycles, Reaper, U-He Diva and Sitala, plus a midi controller on the side (midi fighter twister). No SuperDirt samples or synths at all.

I find this strikes a nice balance for me, allowing for:

  • Recording song MIDI and manipulating it later
  • Changing synth presets using code - a big thing for my performances
  • Recording audio in a familiar and lightweight DAW
  • Having access to a huge preset library
  • Using a single synth for all instruments

I was into hardware synths for a while, but eventually realised that for the requirements above there are really very few options. The only one I could find was the Virus TI, which is sadly very expensive and hard to find (and use). Plus, it became clear the only thing I get out of a hardware synth is the controls - the rest can easily be achieved in software IMO.

Tidal Contributions

How do you contribute to Tidal Cycles? What have you worked on?

I consider verso to be my main live-coding contribution, even at its current stage. It's an algorithmic music-making interface which can be used with different live-coding languages. Right now it supports tidal and I'm moving on to Jaffle next. I really like the concept and will continue to develop it as I have the time.

I'm one of the founders of the Algorave Brazil community. Live-coding has been a thing in Rio de Janeiro and São Paulo for years, but there wasn't a national community chat until recently. The group is now self-managed and regularly brings newcomers to the FoxDot, Sonic Pi, PureData and TidalCycles communities.

I like participating in the club forum, proposing and pitching in on changes, occasionally helping newcomers find solutions. Whenever I find something exciting I try to share it there as well - such as Open Stage Control. When playing with hardware synths I eventually came up with mappings which I shared there as well (model:cycles).

What motivates you to work on Tidal?

Tidal strikes me as the best approach out there for generating music with code. It's extensible, compact, effective, plus it has a very good community behind it. There's a number of things I wish worked differently, and the underlying haskell makes it particularly difficult to change/update, but all in all it's a great tool and I like to help improve it whenever I have the time.

Music

Tell us about your livecoding music.

I like making music that doesn't sound like it's done by code. If I can trick the listener I'm doing something right. I'll also throw in some organic elements in my studio works for good measure.

One trait of my music is stacking loops with different lengths. If done right, it creates an illusion where it's almost unclear when a beat starts and ends. It also makes very simple patterns go a long way, since they will combine at different points with different intervals to create something new and unexpected. A great example of this is Girassol which features two piano melodies stacked. As they shift against each other, different melodies emerge. All in all I like using code to achieve levels of structured randomness and algorithmic patterns for which sheet music is not suited.
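A hedged sketch of that stacking idea (an illustrative pattern, not the actual Girassol code): two loops of different lengths only realign after their lengths' least common multiple.

-- a 3-cycle loop against a 4-cycle loop: they realign every 12 cycles
d1 $ stack [
    slow 3 $ note "0 4 7 11" # s "superpiano",
    slow 4 $ note "12 7 9" # s "superpiano"
  ]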

At around 17, I started listening to Meshuggah and got really into Math Metal / Mathcore. One thing that struck me was their cyclic patterns, which are easier to describe than to formalise in sheet music. One particular instance is Sonic Pi's ring, whereby two sequences (pitches and lengths) with different element counts can be combined into a third sequence which loops with a much longer period. Another example I tried to code multiple times is Perichoresis by Ishraqiyun - they refer to these as Tessellations or just Geometric Patterns. At the time I did not realise these compositions are algorithmic. I started developing my own notation system to achieve similar things with bandmates. During college I learned about tidal - code turned out to be the perfect tool for these songs that hold this special personal meaning.

How has your music evolved since you have been livecoding?

I was taught to live-code by Alexandre Rangel and Joenio (aka @djalgoritmo). I also learned a lot from the University of Brasilia (UnB) Media Lab and the Nômade Lab collective.


I started releasing on my own with Pragma, to learn about releasing music on Spotify and other streaming platforms. It was a live-improvised, live-recorded EP with 3 songs. Pretty generic ambient techno inspired by some Aphex Twin.

I consider Isohedra to be my first proper algorithmic album - it was launched at the start of the pandemic. At that point I was heavily invested in superdirt samples and synths. Also, trying to always do things the "live coding way" - as in, not relying on DAWs or hardware. The album is very loungey and geometric, and uses some very rudimentary timbres, but to this day I quite enjoy its use of fading arrangements.

The next release - Memento - was really big for me. It was my first release with lyrics and vocals. At this point I was using an Elektron Model:Cycles - which I kept up for years - for drums, basses and synth sounds.

Sino was a song that really changed the direction of my music. It threw me back to the drum grooves I used to love but had stopped listening to years ago. I realised that's something I wanted to do a lot more.

For live settings, I try to reinterpret my own music into something that fits the audience. I played a techno set in Buenos Aires through NBTR with my friends Persik, Fakin and Alther. For that set, I sliced together songs from Memento and Isohedra and threw drums on top - it worked. Months later, I started playing hip-hop gigs with Kaleb, a talented pianist and singer friend.

What samples or instruments do you like to work with?

As mentioned above, my setup today is TidalCycles, Reaper, U-He Diva and Sitala, with no SuperDirt samples or synths at all. I use organic drum samples cause it good 👍🏼

I've bought and downloaded a few sample packs from Pocket Operators, the Model:Cycles, etc. In particular, Wavparty has been an amazing resource for that!! But in the end I went back to a comfortable combination of recorded drums and softsynths. It works best for me.

What projects are you currently working on or planning? What's next?

Prece is my latest work - it's being released as I write this doc 😁

Prece is another instrumental album, but with a much more polished sound. I used all the synths I owned on this: a Yamaha Reface CP, an Audiothingies MicroMonsta, plus the Model:Cycles. It also features recorded guitars and sampled drums. It was produced by Jota Dale and released via Torto Disco. It's my longest release - one hour, 10 tracks - and features the best production quality I've had yet.

Later this year I'll be releasing Include, a video series of live performances featuring six musicians I really admire from Brasilia - picture a live-coding "Sofar Sounds". It's been in the works for two years with a large production team and we're very excited about the result.

After that frankly I'm just taking a break. I've left some songs saved for whenever I want to make music again, but at this point I'm focusing a lot on my career as a software engineer. Music making takes a lot of time and effort, and I really need to pick my battles at this point, especially considering I make music for free and it doesn't pay my bills.

My music is released through Torto Disco and can be found on their website. A full and up-to-date list of releases can also be found on my homepage. You can also hear me on your favorite platform via the links below:

Spotify

Soundcloud

Bandcamp

Find me on bandcamp here

Live Sessions

Tidal Musician: Relyt R
aka: R Tyler (https://instagram.com/1000instamilligrams)
Location: San Francisco, California
Album/Release: Xuixo
Genre: Xenharmonic, Techno, Algorave, Microtonal
Available: bandcamp, Spotify, Youtube
Release Date: June 28th, 2023
Comments: Club Tidal Forum Post
Xuixo album art

Introduction

We've been listening to music with the same 12 notes (C, C#, D, Eb, etc.) for hundreds of years, thanks to 18th century Europeans. Times have certainly changed. Why does our music not reflect that? A small but active contingent of artists recognizes and challenges this status quo and creates microtonal music using notes without analogs in 12-tone equal temperament. The name given for music composed with alien, non-12-tone harmonies is 'xenharmonic'.

Xuixo, a 6-track EP released on the xenharmonic label split-notes, is the first studio EP by Relyt R, the alias I created for microtonal algorave. Relyt R is a focused split from my other algorave work as R Tyler. The Xuixo EP features algorithmic and machine-learning-enabled techno and dance music using non-Western tunings. 19-, 21-, and 33-tone equal temperaments were chosen to synthesize alien harmonies and melodies with notes exterior to the common 12.

My motivation for incorporating live-coding, machine learning, techno, and xenharmonic scales is to imagine how music in the future may exist, with a radically different sonic palette. The intention behind the Xuixo EP was to use this set of digital tools and custom code bases to evoke wonder about an algorithmic future. Perhaps our future will be alien and dystopian. If so, a fast, brutal, and bizarre xenharmonic techno soundtrack would be fitting.

Track Highlights

I'm going to highlight code and methodology behind three tracks:

  • Nondegenerate
  • Three
  • 10 Megakelvin

Nondegenerate will be the longest description as I'd like to explain the microtonal setup used throughout the album.

"Nondegenerate" (33 EDO)

Channeling a dystopian sci-fi rave, the sub-heavy techno track Nondegenerate opens the album at 170 BPM and 33 notes per octave (EDO = equal divisions of the octave).

The microtunable VST Arturia Pigments is used for all synth sounds, including the arp, chords, and sub bass. This list of microtunable VST synths on the xenharmonic wiki is how I first heard about Pigments. To microtune Pigments I used Sevish's scale workshop, exported a .scl file for 33-EDO, and loaded it into Pigments. To create this tuning file, the steps are:

New Scale -> Equal Temperament -> "Number of Divisions" = 33, "Interval to divide" = 2/1 -> Export Scala scale (.scl)

On this track I controlled Ableton Live with TidalCycles via MIDI and recorded the results. This track was not performed and recorded live; clips from TidalCycles were pieced together over a DJ-friendly arrangement structure. The rigidity of 8- and 16-bar arrangement structure seems to be foundational or omnipresent in (Western) dance music, so I wanted to enforce that structure for this piece.

The way I played and composed the arpeggio in TidalCycles is with several custom functions I wrote (and one from polymorphic.engine). They are constructed from base TidalCycles functions nTake, toScale' (for non-12-tone scales), and segment. Essentially, I use a custom function takeArp' to map a math function to a microtonal scale and construct an isorhythm out of it.

A little more detail before I share the code:

  • I start with a mathematical trigonometric function of time y(t)
  • quantize it to a certain number of samples {t} with segment
  • map the values {y(t)} to an ordered cycle of pitches in a scale (embedded in a 33-note chromatic scale) with tScale'
  • use state memory (with nT, derived from nTake) so that every time a rhythmic onset is encountered and scheduled, the next note is taken from the cycle of pitches, creating an isorhythm.

Here is the code to make takeArp':

let
  -- allows writing patterns (pseudo-patterns) instead of lists;
  -- useful for the `nTake` and `toScale` family of functions.
  patternToList pat = map value $ sortOn whole $ queryArc pat (Arc 0 1)
  -- toScale' but with pseudo-pattern syntax.
  -- zEDO is the EDO: the number of notes in the non-12 chromatic scale.
  tScale' zEDO scalePat pat = toScale' zEDO (patternToList scalePat) pat
  -- nTake but with pseudo-pattern syntax and a number of values to take.
  -- requires a name for the state counter.
  nT name amt p = nTake name (take amt (cycle (patternToList p)))
  -- the 6-argument function that combines everything above.
  takeArp' name amt zEDO scalePat segAmt func =
    nT name amt $ tScale' zEDO scalePat
      $ fromIntegral <$> round <$> segment segAmt func

Then I can convert trig functions into scale-based state-memory arpeggios:

d1 $ struct "t(13,16)" $ takeArp' "nondegenerate" 9 33
     "0 3 8 12 22 24" 15 (slow 3 $ range (-5) 8 $ sine*sine)
   # s "midi" # midichan 1

This takeArp' function lets you dramatically alter the melody by changing:

  • the trig function
  • its numeric range
  • its frequency (with slow or fast)
  • its segmentation
  • the scale itself
  • the number of values stored in the nTake counter
  • the rhythmic onsets (specified here using struct)

This is not the exact code I used for the melody (I lost the code with :q! in vim) but it is very close.
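To illustrate those knobs, here is a hypothetical variation (same assumed helpers as above) that keeps the isorhythm but swaps the wave, its frequency, and the segmentation:

d1 $ struct "t(13,16)" $ takeArp' "nondegenerate2" 9 33
     "0 3 8 12 22 24" 32 (fast 2 $ range (-5) 8 tri)
   # s "midi" # midichan 1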

Microtonal structure and production

I'll briefly go over chords, bass, and production before highlighting the next two tracks. This section goes into a bit of microtonal theory, then plug-ins and techniques used for production.

  • The chord stabs have the pitches [0, 4, 9, 14, 22] in 33-tone (0, 145.5, 327.3, 509.1, and 800 cents, i.e. steps × 1200/33): root, neutral second, Just minor 3rd, perfect 4th, minor 6th. It's a kind of second-inversion minor 7 with a neutral sixth. I find the EDJI ruler to be very helpful for learning a new temperament. I also use some custom Python tuning tools I made to convert 12-EDO pitch classes to non-12-EDO approximations, but because of the neutral sixth, this chord is unlike any found in 12-EDO.
  • The bass pattern is simple, with a steady stream of 16th notes except there are no 16th notes on the quarter note onsets where the kick drum plays. This makes the kick and bass sound more like a single instrument and helps with mixing. Here's the pattern visualized on a piano roll:
piano roll bass
  • The melodic contour in this bass ostinato uses theory from Lerdahl and Jackendoff's Generative theory of tonal music, namely the 4 1 2 1 3 1 2 1 pattern found in music and linguistics (refer to G. Toussaint's 'Geometry of Musical Rhythm' for an accessible intro). TidalCycles mininotation makes this almost effortless:
-- bass melody
n "~ 0 0 <<7 5 > 3>"
  • Regarding the microtones in the bass melody, the notes divide 2.5 semitones (seven 33-EDO steps) into four pitches, so the melody is quite microtonal yet still perceived as four distinct pitches. To add an interesting timbral effect, I layered two bass oscillators, with the second pitched 15 33-EDO steps up (545.5 cents, an approximation of the 11th harmonic). This harmonization really makes the bass shine and sound cool even on trashy speakers. It's almost like additive synthesis with an extra 11th harmonic. I find the harmonic series to be an indispensable reference when sound-designing percussion and bass.

  • For mixing and production, I used drum bus limiting, multiband sidechaining, mid-side EQ, a mastering chain with the stock Ableton limiter, Rift by Minimal Audio for distortion on the chords and hi-hats, and Output Portal for delay effects. For sub and bass compatibility, I followed Slynk's recipe for making sub bass sound good on any sound system.

"Three"

The experimental club track "Three" was my first production after I coded a machine learning tool in Python. I named it WAV Clustering Workflow (WCW) and I use it for clustering drum samples by acoustic similarity. I used WCW to cluster 18,000 vintage drum machine samples from kb6 and browsed the generated file folders corresponding to clusters (see the WCW readme.md for more info). One of the cluster folders in particular was full of insane laser sound effects, so I simply played through them, more or less in order (hierarchical clustering means that within a cluster, sounds are further sorted by subclusters). I found a similar folder with short closed hi-hats. To play the sounds I drag 128 samples at a time into an Ableton drum rack, then play them in order with:

fast 16 $ slow 128 $ n "0 .. 127" # s "midi" # midichan 1

In Ableton's drum racks you can assign 'choke groups'. This allows you to mute samples when another sample from the assigned group triggers. This prevents samples from bleeding into each other, and is just like using cut in TidalCycles for audio samples in SuperCollider (a trick I learned from Kindohm).
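For reference, Tidal's cut groups work like this (standard usage):

-- samples in cut group 1 choke each other: each new onset stops the last
d1 $ sound "arpy:0 arpy:1 arpy:2" # cut 1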

  • For the dynamically stereo-panned stream of ultra-compressed bass notes at around the 1:20 timestamp, I actually played all 128 sounds in order, and kept all the samples that came out of WCW. It's the sound of a sweep through neighboring notes in a cluster of kick drums in acoustic latent space; very satisfying and wild-sounding. I added OTT (Ableton Live Multiband Dynamics) at 100% (haha), then I added binaural panning using the Envelop max4live devices on this and other instrument tracks, with LFOs controlling the X and Y coordinates.

  • I also used a NEJI tuning (near equal just intonation, a concept I learned from Zhea Erose in the Xenharmonic Alliance discord) using my NEJI calculator to export a scala file for the wobbly vocal-like chord that's played in bursts of 7 (starting at 0:03 timestamp).

  • In a slower track like this, using groove or swing is really helpful. I do this by patterning nudge in TidalCycles or by using Roger Linn's MPC 16th-note grooves in Ableton. The overall composition isn't that crazy, but it's the machine learning for sound selection and the overall contrast that make it interesting.

"10 Megakelvin" (21 EDO)

This track was fully live-coded in TidalCycles with minimal or zero tweaks after recording. I used a 21-EDO .scl file from Sevish's scale workshop and microtuned several instances of Arturia Pigments, similar to how I set up synths for "Nondegenerate" above and for other tracks on the album. I decided to use an 18-beat rhythm because it's close to 16, and it's still an even number, so it's still amenable to head-nodding and/or dancing.

Saying no to twelve notes and no to 16 beats resulted in something incredibly bizarre. When I began this production, I was inspired by the sound design of the late producer Qebrus. But what I arrived at was completely different. The TidalCycles code for this track is about 100 lines. It makes ample use of the non-default TidalCycles function ncat written by pulu on the TidalCycles discord:

let ncat = seqPLoop . go 0
      where
        go _ [] = []
        go t_acc ((t, p):ps) = (t_acc, t', p) : go t' ps
          where
            t' = t_acc + t

It's basically cat, but you specify how long the subpatterns last (see the code below for usage). ncat allows me to spread a bunch of wild and contrasting sounds over a long cycle, and it's fun for improvising because you can change how long any one of the subpatterns lasts, and doing so shifts all the other patterns. In "10 Megakelvin" I use ncat to interweave drum samples from the Modular Drums from Mars collection with extremely sci-fi microtonal chords. I've found the main utility of this kind of horizontal sequencing and concatenation is that it makes things more monophonic and musical (one idea at a time). I find regular cat to be maybe too predictable or constant. Here is most of the code I used for "10 Megakelvin":

setcps(70/120)

-- pitch notes down by 24 semitones for ableton drum racks so 0 = C1
let drumz = (|- n 24)

-- sparse modular drum sounds and xenharmonic arpeggios
d1 $ every 5 (|+ n 3) $ mask "~ t t ~" $ ncat [
    (1.5, n (tScale' 21 "0 5 7 12 17 28" "0 .. 17") # m 2),
    (0.5, drumz $ struct (timeline [5,3,3,7]) $ nT "mdfm" 14 "0 .. 14" # m 3),
    (0.5, n (tScale' 21 "0 5 7 12 17 28" "0 .. 17") # m 4),
    (1.5, drumz $ struct (timeline [5,3,3,7]) $ nT "mdfm" 14 "0 .. 14" # m 5),
    (1.0, n (tScale' 21 "4 5 8 12 17 28" "0 .. 17") # m 2),
    (1.0, drumz $ struct (timeline [5,3,3,7]) $ nT "mdfm" 14 "0 .. 14" # m 3),
    (0.5, n (tScale' 21 "0 5 7 12 17 28" "0 .. 17") # m 4),
    (1.0, drumz $ struct (timeline [5,3,3,7]) $ nT "mdfm" 14 "0 .. 14" # m 5),
    (0.5, drumz $ struct (timeline [5,3,3,7]) $ nT "mdfm" 14 "0 .. 14" # m 1)
  ] # amp "0.6 0.2!5 0.6 0.5!5 0.6 0.5!5"


-- 18 beat two step rhythm. 1 = kick, 8 = hi hat, 3 = clap
d2 $ drumz $ n "1 ~ 8 ~ 8 ~ 8 ~ 8 [1, 3] ~ 8 <~0> 8 ~ <~[8,0]> ~ 8 " # m 1


-- bass, toms, sci-fi chords, drum break with bongos
d6 $ every 5 rev $ every 9 (mask "~ t") $ every 7 (fast "<1.0 1.0 1.00 1.0>") $ stack [
    ncat [
      (7, struct "~ t t ~ t t ~ t t t t ~ t t ~ ~ t t" $ nT "c" 4 (tScale' 21 "0 3 6 9 12" "0 .. 8") # m 6),
      (1, n (tScale' 21 "0 3 6 9 12 15 18" "[0, 1, 2, 3, 6, 7, 8]" |+ "<5>") # m 7),
      (7, struct "~ t t ~ t t ~ t t t t ~ t t ~ ~ t t" $ nT "c" 4 (tScale' 21 "0 3 6 9 12" "0 .. 8") # m 6),
      (1, n (tScale' 21 "0 3 6 9 12 15 18" "0")),
      (1, n (tScale' 21 "0 3 6 9 12 15 18" "[0 2 6 8](12, 18)") # m 8),
      (1, struct "~ t t ~ t t ~ t t t t ~ t t ~ ~ t t" $ nT "b2" 4 (tScale' 21 "0 9 6 3 0" "0 .. 8") # m 6)
    ],
    mask "t t" $ mask "<~t> ~ t <t ~ ~ ~>" $ every 7 (|+ n 1) $ drumz $ n "[<10 8 8 8 8 8 8 8>*2]!9" # m 1,
    -- sliced up acoustic drum break with bongos
    drumz $ every 5 (|+ n 12) $ (|+ n 2) $ n "0 .. 17" # m 9
  ]

-- drum break only
d9 $ drumz $ every 5 (|+ n 12) $ (|+ n 2) $ n "0 .. 17" # m 9

Production Workflow

I'll conclude this section with some notes on my production workflow.

I tend to mix, compress, and limit as I'm composing and coding. I use a technique called Brauerizing, where I group different instruments (drums, basses, melodies, harmonies) and compress and limit each group individually. Then I compress and limit on the master bus. This glues the sounds together hierarchically and makes all the elements interact dynamically. I almost consider it part of the composition, because you need to consider how much you want your independent signals to overlap, where you want negative space, and so on.

  • This track, 10 Megakelvin, is unusual because I didn't use any distortion, just heavy amounts of compression and a little Valhalla reverb. For ear candy, I put an unsynced LFO on the cutoff frequency of a low-pass filter on the acoustic drum break; this technique helps make loops sound less repetitive and makes the whole track sort of wash and swell.
  • On the hi-hats I use a free max4live device called 'Granular Mirror Maze', which I heard about from a reddit AMA with Max Cooper. It adds to these drums a really unique metallic sound that's distinct from normal stereo delay with feedback.

About Relyt R

Relyt R is my new alias, the alter ego of Silicon Valley algorave artist and AV Club SF performer R Tyler. While R Tyler is influenced by jazz, prog, house, IDM, classical, and videogame music, Relyt R is a compartmentalized alias for xenharmonic techno at higher BPMs, alien and futuristic sounds, and brutalist sound design via machine learning.

Xuixo is my first release under this new alias, and I am fortunate to have had it released on Sevish's xenharmonic label split-notes. I have been producing xenharmonic dance music since 2017 and live-coding music in TidalCycles since 2018.

Beyond the topics in this blog post, I am captivated by 3D art, molecular biology, and sea creatures. I'd like to thank the friends who have helped me along the way to this release, especially those who acquiesced to offering an initial vibe-check and listened to my EP when it was still a demo.

Developer: Iván Abreu
Source code: GitHub
Visualizing Application: Processing
Blog post: HighHarmonics

Introduction

Didactic Pattern Visualizer (DPV) is an easy way to visualize sound patterns from Tidal Cycles. It was created by the artist and creative technologist Iván Abreu "...to study the potential and complexity of the syntax of the pattern system for sequencing Tidal Cycles sounds." It utilizes the open source visualization program Processing to provide a scrolling grid where colored shapes appear in rhythm, reflecting the flow of Tidal events (notes). The GitHub materials also include Tidal Cycles examples using DPV by the musician and digital artist CNDSD.

To use DPV (summary):

  • Install and configure the Processing application to receive OSC messages from Tidal
  • Load the OSC and Tidal configurations each time you use it (or load it with your BootTidal.hs)
  • Set the scrolling grid parameters for your Tidal session
  • Add a connection parameter to each pattern you want to visualize

Installation

The GitHub source includes a detailed installation/configuration guide. The main step is to install the Processing application and add the oscP5 library file. You also need to download the Processing runtime pde files that make up the DPV codebase.

OSC targets

DPV leverages the ability of Tidal to send OSC messages to multiple targets (covered in the Tidal OSC docs). DPV listens for OSC messages on port 1818. With the dual targets, every Tidal channel that has the "connectionN" parameter set will display a visual representation of its notes.
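Roughly, the BootTidal.hs wiring follows Tidal's multi-target OSC pattern. This is a sketch based on those docs, not DPV's exact boot code (the field values here are assumptions; check the install guide):

let dpvTarget = Target {oName = "dpv", oAddress = "127.0.0.1", oPort = 1818,
                        oBusPort = Nothing, oLatency = 0.1, oWindow = Nothing,
                        oSchedule = Pre BundleStamp, oHandshake = False}

stream <- startStream defaultConfig
            [(superdirtTarget {oLatency = 0.1}, [superdirtShape]),
             (dpvTarget, [superdirtShape])]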

Examples

The Readme page includes a good set of examples with Tidal code along with mp4 files that play the audio with visualizations. There are also musical examples and code provided by the digital artist CNDSD - well known for expanding boundaries in live coding and interdisciplinary art forms.

Usage

In the ReadMe, Iván notes that there are multiple ways to use DPV:

  • As a tool for composing - for the visual feedback of ordering and sound intentions.
  • During live performance, to help unfold the musical structure and then emphasize and direct attention to rhythmic interactions of multiple sound layers.

Creative Example - composed live code with visualization

The example below shows how I used DPV to support composing prepared code with rhythmic patterns that use cross-rhythms, polymeter, and irregular beat patterns. I found it really helpful to see exactly what is happening within the cycles and to observe how note placements change as I make small adjustments to pattern values.

Description

Erratic Rhythms has 4 separate parts, each with its own distinct rhythmic character. The patterns were created so that each part stands out without "lining up" on the beats. The piece evolves so that the parts are played in different groups of 2 and 3 parts sounding together. Each part has a different timbre, using synthesizers available in SuperDirt (superhex, psin, supergong, soskick).

The organizing idea is to have fully independent parts - each with a distinctive character - that still work well together. To ensure part independence, I keep the rhythmic values of each part sounding in different parts of the beat. That is where the visualization and DPV really helped. During the stage of code preparation, I would experiment with different pattern values and closely watch the visualizations to see where the rhythms land, and then make adjustments to find the right values. During a performance session, I improvise on the prepared code options and use the visualization to give me a sense of how everything fits together and what I should do next.

Examples - Erratic Rhythms

Example 1

Erratic rhythms - visualize ex 1
  • d1 (lower part): 8 beat pattern on the beat with regular subdivisions
  • d2 (upper part): 9 note pattern using a polymetric subdivision value of %5.2 and nudge 0.2
d1 $ freq "[70 ~ 800] [<500 ~ > < ~ ~ <300*2 300*3> > [1170 ~ 900]]" # sound "superhex"
# connectionN 4 # sizeMin 12 # sizeMax 80 # figure "rect" # color "0519f5" -- DPV OSC values

d2 $ freq "{100 200 400 800 900 1100 1300 1500 1600}%<5.2>" # sound "psin" #nudge 0.2
# connectionN 3 # sizeMin 12 # sizeMax 60 # color "8905f5"

Example 2

Erratic rhythms - visualize ex 2
  • d2 (lower): 9 note pattern, with polymetric subdivision value of %7.4
  • d3 (middle): 17 note pattern with different metric divisor values [supergong!17]/<3.4 5.2 1.2>; pattern speed changes with each cycle
  • d4 (upper): 3 notes against 5 beats with notes offset with rests
d2 $ freq "{100 200 400 800 900 1100 1300 1500 1600}%<7.4>" # sound "psin"

d3 $ mask ("1 0 1") $ s "[supergong!17]/<3.4 5.2 1.2>" #nudge 0.2
# connectionN 2 # sizeMin 10 # sizeMax 20 # figure "circle" # color "2df505"

d4 $ freq "~ 400 ~ 800 [~ <1300 1600> ~ ~]" # s "soskick"
# connectionN 1 # sizeMin 12 # sizeMax 80 # figure "circle" # color "f58711"

Example 3

Erratic rhythms - visualize ex 3
  • d2: 9 note pattern with polymetric subdivision of 16
  • d3: 17 note pattern with alternating polymetric subdivisions %<1 1.4 0.8>
d2 $ freq "{1100 200 400 800 900 1100 1300 1500 1600}%16"  # sound "psin"

d3 $ mask ("1 1 1 0 1") $ sound "[supergong!17]/<1 1.4 0.8>" #nudge 0.2
#connectionN 2 #sizeMin 10 #sizeMax 20 #figure "circle" #color "2df505"

Example 4

Erratic rhythms - visualize ex 4
d2 $ jux (rev) $ freq "{100 200 400 800 900 100 1300 1500 1600 1800 2100 2400 ~}%11"  # sound "psin"
# connectionN 3 # sizeMin 12 # sizeMax 60 # color "8905f5" # nudge 0.2

d3 $ jux (rev) $ sound "[supergong!17]/<0.6 1>" # nudge 0.3
# connectionN 2 # sizeMin 10 # sizeMax 20 # figure "circle" # color "2df505"

d4 $ fast 0.5 $ every 2 (degradeBy "<0.2 0.5 0.8>") $ freq ("~ 400 ~ 800 [~ <1300 1600> ~!2]" |* 0.5) # s "soskick"
# connectionN 1 # sizeMin 12 # sizeMax 80 # figure "circle" # color "f58711"

So that's it!

Check out Iván's Didactic Pattern Visualizer

HighHarmonics

Tidal CyclistHelen Papaioannou
akaKar Pouzi / Papaloannov
LocationYorkshire, UK (currently Sheffield…soon to be…somewhere else in Yorkshire!)
Years with Tidal4 yrs intermittently
Music available onlineYouTube, BandCamp
Other music/audio swBaritone sax, synthesizers, Ableton, bells, toys, games, scores

Livecoding

What do you like about livecoding in Tidal? What inspires you?
For me, Tidal is a super-fun environment that affords many possibilities and surprises, right from the outset of starting as a beginner. I enjoy the feeling of being able to make changes with intention and the musical surprises that arise from unexpected interactions with functions, misunderstandings and errors. I also like that it’s relatively easy to start making music even with limited experience of different functions and syntax.

How do you approach your livecoding sessions?
I use Tidal for a variety of purposes, sometimes for live situations, sometimes to create and record music that I then layer with other sounds (e.g. performing with baritone sax or synthesizers) and produce into tracks. For me, Tidal is a good tool to come to when I don’t have or want a clear idea of what I want to do; even small changes to patterns can lead you down a musical rabbit hole you didn’t foresee.

What coding approaches do you like to use?
At the moment I’ve been mainly having fun in Tidal by working with one sample, or a very small palette of samples. I have a lot of fun with limitations. Here’s an example of doing something super simple with the same sample: using sometimesBy with silly increments of ‘fast’/‘slow’, random speed ranges, and sequences of the same sample repeated a different number of times. I like to push things until they break, and often they do! I’ve seen other people doing fun stuff with inverse patterns, which I also often use.

let inverse 0 = 1
    inverse 1 = 0

do
  let pat = "[1 0 1 0 0]"
  d1 $ gain pat # s "ehit" # up "<-12>" # cps 1
  d2 $ gain pat # s "ehit" # up "<-7>"
  d3 $ gain pat # s "ehit"
  -- ...

do
  let pat = "[1 0 1 0 0]"
  d1 $ sometimesBy 0.3 (fast "0.99") $ gain pat # s "ehit" # up "<-12>" # cps 1
  d2 $ sometimesBy 0.3 (fast "1.001") $ gain pat # s "ehit" # up "<-7>"
  d3 $ gain (inverse <$> pat) # s "ehit"

--- ...

do
  let pat = "[<1*11 1*12 1*13> 0 <1*10 1*14 1*16> 0 0]"
  d1 $ sometimesBy 0.4 (palindrome) $ gain pat # s "ehit" # up "-12" # cps 1
  d2 $ sometimesBy 0.4 (palindrome) $ gain pat # s "ehit" # up "-7"
  d3 $ sometimesBy 0.4 (palindrome) $ gain pat # s "ehit"

Starting from patterns of Greek dances, like hasaposerviko, makes for fun improvs that could go anywhere

d1 $ s "[~ ebd ~ ebd, ~ clap ~ clap:10, ~ <met:4?>, [fing ~ clap]*4]" # pan (rand)
d2 $ loopAt 4 $ s "hv" # n (irand 20)
d3 $ s "[[zouki2*2 zouki2]*4]" # n (irand 30)
d4 $ every 4 (|+ up "7") $ up "[-5 2]" # s "BruBass:2"

Do you use Tidal with other tools / environments?
Yeah, I’m not very faithful to any particular environment; I pick & choose depending on what I’m doing and how I feel. I often end up recording improvisations or specific results of code I like in Tidal into a DAW and sometimes layer other things on top. Or I use Tidal to control synthesizers via MIDI.

Tidal Contributions

How do you contribute to Tidal Cycles? What have you worked on?
I have used Tidal in educational workshops and enjoy seeing how it excites people and inspires interest in music making more generally. I generally introduce/incorporate a variety of different approaches to music making when delivering workshops or teaching.

Music

What projects are you currently working on or planning? What's next?

  • The Ultraniche label is releasing one of my Kar Pouzi singles, Clippity Clop, in 2023. This track was originally an improvisation I did in a live set with Tidal, made with one electronic stab sample. I then revisited the code, recorded the output & played some sax on top, in unison with the resulting pattern generated in Tidal.
  • I’m slowly working towards a solo Kar Pouzi release in 2024, including tracks made using a variety of tools, including Tidal amongst other things.
  • I’m also writing a piece for 2 percussionists and touring an audiovisual collaboration in Japan with artist Noriko Okaku.
  • I've been playing in a new, very quiet duo with percussionist Charlie Collins, which we're excited to perform & record soon.

Background

I work with a mixture of approaches and tools in my music, sometimes improvising from scratch (with saxophone, synthesizers, or Tidal), sometimes composing things from start to finish (be it through a DAW or a score, sometimes incorporating Tidal in electronic works), or using pattern games and scenarios with ensembles which are somewhere in between.

Helen with Unicorn

Tidal musicianRamon Casamajó
akaQBRNTHSS
LocationBarcelona (Sp)
Album/ReleaseThe Magic Words Are Squeamish Ossifrage
GenreGlitch/Noise, Electronic, Experimental
AvailableInterworld Media - Bandcamp
Release date21/04/2023

Summary

My name is Ramon Casamajó - aka QBRNTHSS (pronounced “quebrantahuesos”, meaning “bearded vulture” in Spanish). QBRNTHSS is the alias I use for my solo works focused on electronics. This post covers the live coding, mixing and recording process I used in my album, recently released through Interworld Media on Bandcamp.

The album title - The Magic Words Are Squeamish Ossifrage - is the plain text solution to many cryptographic challenges, a tradition that originated with a challenge set by the authors of the RSA encryption algorithm in 1977. It is my first full-length album as QBRNTHSS, the result of more than a year of live performances and rehearsals using Tidal Cycles and Supercollider as main instruments. It’s published on the Sheffield label Interworld Media as a digital download and on cassette tape, and aesthetically it’s a mixture of synthetic textures, noisy ambients and broken rhythms.

I’m going to explain the recording process used for the whole album - except one track that was recorded previously without live coding, but I feel it fits in perfectly...I bet you can't guess what track it is :-)

Hardware and software used

The album was recorded and mixed in different locations with this hardware and software equipment:

  • Lenovo ThinkPad T14 with Manjaro Linux
  • Focusrite Scarlett 2i4
  • Effects pedals: Boss DD-7 digital delay, TC Electronic Hall of Fame 2, Meris Ottobit Jr., Boss Metal Zone 2
  • Korg Nanokontrol 2 midi controller
  • Cadence (JACK)
  • Carla plugin host
  • VST synthesizers: Odin 2, Helm, Yoshimi
  • Supercollider
  • Tidal Cycles
  • Ardour (DAW)

As sound sources I used some samples that I’ve been collecting for a while (especially the percussion ones), some samples that I recorded myself, and Supercollider synths made by the community, plus a few of my own.

After the recording and mixing process the album was mastered by Alfonso EVEL at EVEL Records.

Recording process

The record is the culmination of about a year performing and rehearsing. At some point I had a bunch of good ideas (at least that’s my impression), and the motivation to make a new album. But I didn't want to just record what I was doing live, my goal wasn’t to document my live practice. I wanted to do an album that was interesting and enjoyable for itself, an album that I would buy myself and listen to at home.

From the beginning my conception of the album was to be a collection of short or concrete sound passages, the previous ideas went in this direction too. I didn’t want to record long soundscapes that evolve slowly over many minutes, which I love too, but that wasn't the point here.

Also some time before the recording I had started to use some effects pedals to process the sound and make the live performances more dynamic and fun, so I wanted to use them on the album.

I decided to record on multiple tracks on the DAW (Ardour), and in more than one take when it was necessary. That allows me to:

  • Polish the mix in the DAW.
  • Apply more controlled dynamic changes from Tidal Cycles than if I had to record in one single take. I could focus on some parts of the song one after another.
  • Process some parts separately with the effects pedals afterwards.

That said, I didn’t record every sound in a separate track, just what I needed to let me construct the song comfortably. On the other hand I didn’t do overdubs once a track was recorded, only little edits sometimes.

So basically the record process for a song went like this:

  • Play and record the different tracks from Tidal Cycles to Ardour.
  • In Ardour adjust the mix and do some edits if necessary.
  • Using an effects loop, record some tracks again through the effects pedals, applying them to taste.
  • Finalize the mix with the final touches: adjusting volumes, final edits (fade-ins or fade-outs, trimming some starting or ending parts, etc.), and doing some stereo panning on a few tracks.

Code

As an example, here is the code and DAW screenshots for the second song on the album, entitled Bone:

setcps (60/60)

-- sustain loop
d1
  $ trigger 1
  $ s "snoisefb*5" # n "<b5'min7>"
  # voice 1
  # sustain (rangex 0.025 0.9 $ slow 100 $ tri)
  # lock 1 # delay 0.2 # delayt 0.1 # delayfb 0.2
  # accelerate 1 # speed 3
  # pitch1 (range 0.02 0.1 $ slow 27 $ sine)
  # resonance 2.5 # gain 0.75
  # octer 1

do
  let pats =
        [ ("pl", s "HIHATS:6*4" # n ((irand 5)+10) # sustain 0.5),
          ("cr", s "KORGER1*4" # n ((irand 4)+29) # sustain 0.1),
          ("cl", s "~ claps ~ claps ~" # n ((irand 5)+2)),
          ("bb", s "[BASEDRUMS:22*4, BASEDRUMS:41*4]"),
          ("bs", s "BASEDRUMS" # gain 0.96 # n (choose [9,14,17,19,29,33])),
          ("sl", s "~")
        ]
  d2 $ fast 2 $ ur 6 "[{pl} sl bb]" pats []
  d1
    $ stb 0.3 (fast 2)
    $ s "snoisefb*5" # n "<b5'min7>"
    # voice 1 # sustain 0.025
    # lock 1 # dly 0.2 0.1 0.2
    # accelerate 1 # speed 3
    # pitch1 (rgs 0.01 0.1 12)
    # resonance 2.5 # gain 0.75
    # octer 1

d3
  $ trigger 3
  $ slow 11
  $ s "wndelayfb" # n "c"
  # gain 0.9

xfadeIn 4 30
  $ slow 10
  $ off 0.01 (# fshift ((cF 0 "23")*220))
  $ stb 0.3 (stutter 3 (1/32))
  $ degradeBy 0.4
  $ stb 0.4 (jux rev)
  $ n (scramble 3 (arpg "<a5'min7>")) -- ff5'min9 d6'sus4
  # s "sawdelayfb"
  -- # pan rand
  # sustain 5 # gain 0.9 # orbit 3

d5
  $ n "c3" # s "fu"
  -- # octave ((irand 5)+3)
  # reps (((cF 0 "21")*3)+2)
  # ftime (cF 0 "22")
  # pan (rgs 0 1 2)
  # gain 0.9
  # lpf 1250

xfadeIn 1 19 sil -- fb
xfadeIn 2 20 silence -- beats
d3 sil -- perc dly
xfadeIn 4 20 silence -- arpg
xfadeIn 5 20 silence
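
Note that stb, dly, rgs and sil above are not stock Tidal functions; they look like personal shorthands from Ramon's boot file. A guess at plausible definitions, so the block above reads (my sketch, not his actual code):

let stb = sometimesBy                           -- shorter sometimesBy
    dly t dt fb = delay t # delayt dt # delayfb fb  -- delay level/time/feedback in one go
    rgs lo hi n = range lo hi (slow n sine)     -- slow sine sweep between lo and hi
    sil = silence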

DAW - Ardour

The Ardour screenshot reflects the status after the first recording step. This is what I’ve recorded on each DAW track:

  • d1 -> feedback synth
  • d2 -> perc
  • d3 -> delay perc
  • d4 -> synth arpg
  • d5 -> synth bloop
Ardour DAW view

Next, the Ardour screenshot shows the status after the effects loop recording step, where some edits and extra tracks with the effects pedals were applied:

  • I changed the beginning of the song discarding the first part of the “feedback synth” track (you can see the final track as “feedback perc” and the original “feedback_perc_ini” muted).
  • I used the Boss delay pedal to add some dub flavor to the original track "perc", resulting in “drums_dly” left and right (see the original track as “drums” muted).
  • I used the TC Electronic pedal to add some reverb to the original “delay perc” track, resulting in “perc_delay_fx” left and right (see the original track as “perc_dly” muted).
Ardour DAW view 2

So that’s it. I hope this post is interesting and that you’ll listen to The Magic Words Are Squeamish Ossifrage. Working on it was a challenge that I enjoyed a lot, and I love the results… I think that finally I will buy the album!

More info

For next gigs and more info you can follow me at:

Tidal CyclistEloi Isern
akaEloi el Bon Noi
LocationCentelles (Spain)
Years with Tidal4 yrs
Other LiveCoding envSonicPi, Hydra
Music available onlineYouTube, BandCamp
Code onlinehttps://github.com/eloielbonnoi
Other music/audio swAbleton Live, Audacity
CommentsClub Tidal Forum Thread
Eloi with logo

Livecoding

What do you like about livecoding in Tidal?
For me, Tidal Cycles is a compositional tool: it allows me to make the complex music I've always dreamed of and to do it very quickly, and, more importantly, it allows me to perform it by myself in any circumstance. I'm particularly interested in the ability to create complex iterative structures and in the flexibility it gives you to manipulate the sound. Sometimes, when I finish writing one of my endless scripts, I'll run it and spend some time listening to what Tidal comes up with. I'm fascinated by the code's ability to generate unexpected structures all the time.

What inspires you?
I am often inspired by the work of other composers and live coders. I'm always looking for sessions on YouTube of artists I'm interested in. I don't have a programming background so I often design my processes starting from those of other colleagues. In terms of genres, I spent a few years listening to a lot of 20th and 21st century contemporary music, but now I'm quite interested in the experimental electronic music scene. Lately I've been listening to a lot of glitch music that I discover on Bandcamp. I love browsing Bandcamp.

How do you approach your livecoding sessions?
I am currently presenting a series of short pieces, the "Rumble machines", which is basically a catalogue of algorithmic processes for generating sound and modeling it on the fly. It's a show designed to be listened to in good conditions, but it is not oriented to the dance floor. I'm working on the possibilities of a script that allows me to mix pieces from other artists and manipulate them with the typical Tidal Cycles processes because I want to be able to offer a rave show.

What functions and coding approaches do you like to use?
I'm a super fan of the slice function. It works well with quantized loops; I tend to modify the inner pattern on the fly. Starting here...

d1
  $ slice 8 "0 1 2 3 4 5 6 7"
  $ s "yourLoop"

...and ending somewhere close to this

d1
  $ slice 8 "<0 [3 4]> 1!2 3*[4|8|2] [2 4 6] 5 <~ 6> <7 2>*[1|8|12]"
  $ s "yourLoop"

Thanks to live coders like Hiroki Matsui I've rediscovered the spread ($) function. I learned a lot from his work.

do
  setcps (90/60/4)
  d1
    $ fast 2
    $ stack [
        spread ($) [id, rev, (|+| accelerate "1 2"), (|+| coarse "16 32 24"), chop 16, stut 4 0.25 0.05] $
        cat [
          (sound "amencutup*8" # n (irand 32)) |+ accelerate 2,
          (sound "v*4" # n (irand 6)) |+| pan "[0 1]*4",
          (sound "casio*8" # n (irand 6)),
          (sound "ulgab*8" # n (irand 6)) |+| pan "[0 1]*4"
        ] |+| unit "c" |+| speed 8 # room 0.4,
        whenmod 8 3 (const silence) $
        stack [
          midinote (slow 2 $ (run 16) * 10 + 60)
            # s "supergong" # pan (slow 7 $ range 0.8 0.2 $ sine),
          midinote (fast 1 $ (run 16) * 10 + 60)
            # s "supermandolin" # pan (slow 7 $ range 0.2 0.8 $ sine)
        ]]
    # decay "[1 0.2]/4"
    # voice "[0.5 0]/8"
    # sustain (slow 7 $ range 5 0.5 $ sine)
    # room (range 0.4 0.9 $ slow 17 sine) # size (range 0.3 0.6 $ slow 17 sine)


How do you contribute to Tidal Cycles? What have you worked on?
I try to stay connected with the activities scheduled by the TopLap Barcelona community - attending our monthly from-scratch sessions, being part of the festivals we program and giving Tidal workshops whenever I can. I'm very fortunate to belong to this community and I feel very close to them. I've also recently been creating a live coding community at one of the universities in Barcelona. It is still an early project, but I hope that next year many students will join us.

I'll take this space to offer a reflection: is the global live coding community getting old? In other words, are we managing to engage young people (post-teenagers in their twenties)?

Do you use Tidal with other tools / environments?
Yes, I route every orbit to its own track in Ableton Live, adding compression, EQ and a limiter to each one. I also add a mastering patch on the main output.

Music

How has your music evolved since you have been livecoding?
Without Tidal Cycles I would not be able to produce my music, or at least not as quickly. I try to think of my pieces as sound sculptures: sound that moves and mutates, structured by a chaotic order. I like the contrast between minimalist, almost pointillist fragments and noisy passages. Working with other musicians has always been a source of conflict for me, for several reasons: the commitment, my questionable leadership skills... Discovering Tidal Cycles has allowed me to make all the noise I wanted without needing anyone. This autonomy has then allowed me to collaborate with other artists in a much "healthier" way. Thank you Alex!

What projects are you currently working on or planning? What's next?

  • My live coding practice is mainly focused on the creation of new material to be published at the end of the year and to be able to do many concerts in 2024. Although Tidal is a tool that saves you a lot of time, I'm quite slow in composing and very demanding on myself. The preparation of the live shows takes me a lot of time.

  • I collaborate with Eloy Fernández Porta, a very interesting writer and thinker with whom I do spoken word sessions. Curiously, we are both named Eloi, an unusual name.

  • I also have a project Noi$ with White Pèrill in which we make improvised electronic music from scratch. In our shows I use the screen to tell the biography of a composer with humor interspersed with code and writing.

  • In 2024 I will collaborate with a very interesting poetess and a flamenco singer. I will keep you posted. I am very excited!

Music / recorded livecoding sessions:

Comments: Club Tidal Forum Thread

Introduction

Hello, I'm Pierre Krafft, aka Zalastax, a software engineer and hobby-musician from Gothenburg, Sweden. I've been enjoying Tidal since late 2021 after first dabbling a bit with Orca and, way before that, Sonic Pi. I primarily use Tidal to control hardware synthesizers using MIDI. Tidal is a really neat sequencer and I think there's a lot of untapped potential which I hope to explore more in the future.

This post shares my experience of replacing a significant part of the Tidal internals. What I achieved is a direct integration of Tidal with Link, a library for synchronizing musical time between applications.

In this post, I'll explain why Link integration was important to me, provide an introduction to Tidal internals (with a focus on scheduling), some important concepts of Link, and how I overcame some really tough challenges!

The idea

I make music with friends who use traditional synth setups. To have our synths play in sync, we connect them over MIDI. But when I started using Tidal, setting up the MIDI clock was not so convenient and I was afraid of it crashing, which would stop the show for everyone. So I started looking for a way to have a stable MIDI clock and connect it with Tidal. Soon thereafter, I learned about Link.

The purpose of Link is to "synchronize musical beat, tempo, phase [...] across multiple applications running on one or more devices." Unlike traditional MIDI clock synchronization, which relies on a single device acting as the master clock and sending timing information to all connected devices, Link uses a peer-to-peer network protocol to allow all devices to communicate with each other and agree on a common tempo and beat phase. This allows for more accurate and stable synchronization between devices, even if the tempo changes or if devices are added or removed from the network. Additionally, Link provides a way to sync devices wirelessly, eliminating the need for physical connections between devices.

My idea was to have some application listen to the MIDI clock and use Link to sync Tidal with that MIDI clock. I learned later that Link is not meant to be used that way, but the idea got me started on integrating Tidal with Link...

I get started

November 2021 is the start of my journey for adding Link support to Tidal. I started discussing the path forward with Yaxu, Tidal's inventor, in two Github Issues (1, 2). Yaxu had already done some thinking about adding Link to Tidal and he had also done some exploration that I could learn from. His positive responses motivated me and brought me confidence that this was a pursuit worth taking!

One of the main challenges of integrating Link with Tidal was that Tidal is written in Haskell, while Link is a C++ library. I knew that C++ libraries can be exposed as C libraries, and that Haskell can interact with C libraries through a mechanism called the "Foreign Function Interface" (FFI), but I had never done so before. Nonetheless, I set out to create a basic Link integration in Haskell and fairly quickly had something that compiled. In the world of Haskell, this is often a huge success which means you can pack up - work's done! But in this case, work was far from done...

Challenges

Some parts of the Link library were working, but when calling the crucial code this->link.enable(true);, GHC (the Haskell compiler / interpreter) crashed.

Debugging internal GHC crashes is tricky for most people, so I made many twists and turns to find out what might be wrong. Several long nights were spent reinstalling Haskell and battling build system configuration. The full details are documented in the issue for Link support in Tidal, but the short story is that I found I could avoid the crash by including Link as a shared library. This workaround was not suitable for the final release of Link support, but it let me continue the work.

After making great progress on the Link integration, I became ready to start replacing the workaround. 6 weeks into my ambitious project, I was ready to report an issue to the Haskell maintainers. I reported that my program worked when using cabal v2-run but not cabal v2-repl. Since Tidal uses GHCi (the REPL), this problem was crucial to resolve.

Several GHC maintainers pitched in, offering suggestions and trying to reproduce my issue. Unfortunately, while I could reproduce the issue, the GHC maintainers were not as successful, so interest faded.

I went ahead and reproduced the issue several times, but only on Windows - not Linux, and even got a friend to reproduce it on their machine. However, this did not immediately rekindle the interest of the maintainers.

I started digging deeper to identify the root cause. First by using WinDbg, but the call stacks and multiple threads were too convoluted for me to digest. So I resorted to print-debugging, working my way through the C++ code, adding printouts everywhere. Soon thereafter, I had my eureka moment: I isolated the issue to the use of C++ exceptions! Even caught exceptions caused issues for GHCi, but not for the compiled executable.

I could now provide a minimal example and, one day later, a Haskell maintainer replied with a detailed analysis, which I quote here in full:

The RTS's Runtime linker doesn't support C++ exceptions on any platform in non-dynamic way. Historically we've never needed to as not many C++ code was being used. It works on Linux because it defaults to dynamic way, which gets the system loader to handle the heavy lifting.

On Windows we don't handle .xdata and .pdata sections, so once you get an exception the OS unwinder asks us if we can handle the exception and we say no and move on and the crash occurs. You don't see GHC's panic message because the dynamic code is created outside of GHC's VEH region.

If I instead build a .dll and make my FFI calls towards that .dll, the code does not crash in GHCi

Yes for the same reason as it works on Linux, the exception will be handled by the system unwinder.

Now supporting this on Windows these days is a lot easier than it used to be. GHC Already has a native exception handling in place for itself in the VEH handlers. and we've dropped support for x86. x86_64 uses exception tables but gives us an easy way to extend the exception tables for dynamic code like JITed code.

3 months later, the issue was fully fixed and ready to be included in GHC 9.4.2. This let me finally remove the workaround, use Link directly instead of as a shared library, and integrate my work into the Tidal repository. This bug in GHC is the reason Tidal 1.9 requires GHC 9.4.2 or later on Windows.

I'm very proud of my perseverance in resolving this issue. I started my attempts late November 2021 and merged the code into Tidal early July 2022.

The integration of Link with Tidal posed several challenges but the end result was a success. In this section, we provide an overview of the architecture of the Link and Tidal integration and discuss the design choices made along the way. This information can serve as a guide for those who wish to create their own Link integration in different projects.

Tidal Innards

Let's start by exploring some Tidal Innards. For a more complete reference, please refer to What is a pattern?.

Some important concepts in Tidal innards are Arc, Part and Event:

-- | A time arc (start and end)
type Arc = (Time, Time)

-- Tidal often needs to represent a Part of an Arc.
-- It does so with two arcs, the first representing the whole of the part,
-- and the second the part itself.
-- Often both arcs will be the same,
-- which simply means that we have a whole that has a single part.
--
-- | The second arc (the part) should be equal to or fit inside the
-- first one (the whole that it's a part of).
type Part = (Arc, Arc)

-- | An event is a value that's active during a timespan
type Event a = (Part, a)

Tidal processes musical patterns by querying for all Events within an Arc. The Events returned by the query are distributed to targets such as SuperCollider. These details remained unchanged when moving to Link as the base for the scheduler.
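
You can poke at this machinery directly. In real Tidal the simplified types above are records (an Arc has start and stop fields), and the query function is exposed as queryArc, which you can try in GHCi:

-- list every event of the pattern "bd sn" in the first cycle;
-- each Event prints with its whole, its part and its value
queryArc (s "bd sn") (Arc 0 1)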

The Link API is responsible for converting between beats/cycles on a shared timeline and a clock that corresponds to when the sound should play from the speaker. The concept is visualized in a timeline diagram below. Two Link instances are shown. The top and bottom of the diagram show how the two instances have their own beat counter. However, the beats have a shared phase - they align over bar or loop boundaries. I created the diagram below with alignment every 8 bars.

link timeline diagram

The API between Tempo.hs and Stream.hs hides how Link is called. This helps separate concerns, but could also enable alternative timekeeping mechanisms. It should not be too difficult to implement the API using the local system clock and memory to keep track of a local timeline. Please reach out if you would like to create such an implementation! Doing so could open the door for adding back other synchronization mechanisms :)

The API between Tempo.hs and Stream.hs includes the following operations:

data LinkOperations = LinkOperations {
  timeAtBeat    :: Link.Beat -> IO Link.Micros,
  timeToCycles  :: Link.Micros -> IO P.Time,
  getTempo      :: IO Link.BPM,
  setTempo      :: Link.BPM -> Link.Micros -> IO (),
  linkToOscTime :: Link.Micros -> O.Time,
  beatToCycles  :: CDouble -> CDouble,
  cyclesToBeat  :: CDouble -> CDouble
}
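
As a sketch of what such a local implementation could look like - my illustration, with plain type aliases standing in for Link.Micros and Link.BPM, a fixed tempo, and an assumed 4 beats per cycle:

type Micros = Int     -- stand-in for Link.Micros
type BPM    = Double  -- stand-in for Link.BPM

data LocalClock = LocalClock { startMicros :: Micros, clockBpm :: BPM }

-- one beat lasts 60e6 / bpm microseconds
timeAtBeat' :: LocalClock -> Double -> Micros
timeAtBeat' c beat = startMicros c + round (beat * 60e6 / clockBpm c)

-- microseconds elapsed -> beats -> cycles (assuming 4 beats per cycle)
timeToCycles' :: LocalClock -> Micros -> Double
timeToCycles' c t = fromIntegral (t - startMicros c) * clockBpm c / (60e6 * 4)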

As mentioned under Challenges, Link is a C++ library and the Haskell integration is done using the "Foreign Function Interface" (FFI). Haskell has some support for integrating directly with C++, but it seemed too difficult to use for my taste.

Fortunately, while I was working on my implementation, Link released a C-wrapper of their library. Integrating with C-libraries from Haskell is fairly easy, and mostly comes down to setting up the compiler correctly and defining the C-functions in a .hsc-file.

Conversion is straightforward:

-- Haskell
data AbletonLinkImpl
data SessionStateImpl

newtype AbletonLink = AbletonLink (Ptr AbletonLinkImpl)
newtype SessionState = SessionState (Ptr SessionStateImpl)

foreign import ccall "abl_link.h abl_link_commit_app_session_state"
  commitAppSessionState :: AbletonLink -> SessionState -> IO ()

// C
typedef struct abl_link
{
  void *impl;
} abl_link;

typedef struct abl_link_session_state
{
  void *impl;
} abl_link_session_state;

void abl_link_commit_app_session_state(
  abl_link link, abl_link_session_state session_state);

Ticks and Processing Ahead

Tidal needs to process events a few hundred milliseconds early so that the event can reach the sound engine/synthesizer in time. Otherwise, the event would play late from the speaker, and we would not be synchronized with others in the same Link session. The processing ahead is configured via cProcessAhead.
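
In practice this is a field on the stream config in BootTidal.hs; for instance (the field name is the one the text mentions, the values are illustrative):

tidal <- startTidal (superdirtTarget {oLatency = 0.05})
                    (defaultConfig {cProcessAhead = 0.3})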

The scheduler is based on "logical time" that uses a tick based system. This means that the implementation keeps track of the starting time and the length of each "tick" in order to step time forward in equal chunks. To turn the tick number into a "logical time", the following formula is used:

logicalTime startTime ticks' = startTime + ticks' * frameTimespan

Working with logical time / ticks is a common approach to avoid time drifts which I kept from the original scheduler. I'm not sure how much difference it still makes now that Link does the heavy lifting, but it felt safest to keep it.
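
A minimal sketch of such a loop (names illustrative; the real scheduler lives in Tidal's Tempo/Stream modules and does more bookkeeping). Because each sleep is computed against the start time rather than against "now", small sleep inaccuracies never accumulate:

import Control.Concurrent (threadDelay)

frameTimespan :: Double
frameTimespan = 0.1  -- seconds per tick

logicalTime :: Double -> Int -> Double
logicalTime startTime ticks' = startTime + fromIntegral ticks' * frameTimespan

tickLoop :: IO Double -> (Int -> IO ()) -> Double -> Int -> IO ()
tickLoop getNow work startTime tick = do
  work tick                                    -- process this tick's Arc
  now <- getNow                                -- e.g. a monotonic clock in seconds
  let next = logicalTime startTime (tick + 1)  -- anchored to the start time
  threadDelay (max 0 (round ((next - now) * 1e6)))
  tickLoop getNow work startTime (tick + 1)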

Putting it together

With the different components explained, I can now explain the whole:

  • Tidal processes events ahead of time by querying for events within an Arc that has not happened yet (based on the tick system).
  • Processing events ahead of time is common to all Link based systems since it's the only way to not play the sound too late due to the Link API being based on when the sound should play from the speaker.
  • The translation from cycles in Tidal to a timestamp is performed by the Link API.

The picture below shows the relation between Link, the logical clock, and the current time. The current time is greater than the logical time of Tick 24, which means that we should be processing all Events that happen between the Arc (Tick 24, Tick 25). We query for all Events within this Arc and convert the start and end cycle of each Event to a clock time by using the Link API. As mentioned earlier, the events that we currently query for should all happen in the future. This is why the mapping from Logical clock to Link instance time is a diagonal arrow that goes forward in time.

logical clock

A note about multithreading

The scheduler runs in a separate thread, so Tidal is multithreaded. This follows the approach used by the previous scheduler and ensures that the GHCi REPL keeps being responsive.

The original design used several MVars to copy data between threads. MVars are a concept from concurrent Haskell. They act as synchronising variables and are used for communication between concurrent threads.

However, the design based on several MVars made the code difficult to follow and hard to verify for correctness. In the new design, we communicate between threads using a list of actions, similar to dispatching Actions to Redux in JavaScript or calling an actor in Erlang. This puts all the tricky logic that deals with the internal state in a single place. Following this approach makes the code much easier to reason about, and is why I like Erlang so much ;)

The list of actions is communicated using an MVar [TempoAction]. The definition of TempoAction is as follows:

data TempoAction
  = SetCycle P.Time
  | SingleTick P.ControlSignal
  | SetNudge Double
  | StreamReplace ID P.ControlSignal
  | Transition Bool TransitionMapper ID P.ControlSignal

Each action can thus be handled in sequence, making the logic easy to reason about.
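
A sketch of that consumer loop - illustrative, not the actual Tidal source, which also threads state through and blocks until work arrives:

import Control.Concurrent.MVar

processActions :: MVar [TempoAction] -> (TempoAction -> IO ()) -> IO ()
processActions box handle = do
  acts <- takeMVar box       -- producers append actions to this MVar'd list
  putMVar box []             -- leave an empty queue for the producers
  mapM_ handle acts          -- handle strictly in order, in one place
  processActions box handle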

Final words

Contributing to Tidal was (and continues to be) a very fun experience! The community is very nice and supportive and I enjoy working in the codebase.

It was surprising to see that I appear to be the first person to integrate Haskell and C++ on Windows. At least I am the first to report an error instead of giving up. I mean, the error I stumbled upon would have been found in most efforts to use C++ from Haskell, because exceptions are very common in C++.

Once I could avoid the bugs in GHC, reworking the Tidal internals was quite straightforward. Even though I ripped most of the scheduler apart, the Haskell type system guided me through the refactoring. The next goal was always visible and I had direction for what step to take next.

Finally, I want to thank my girlfriend Moa for supporting me through this project and for listening to me explaining my ups and downs. The details must have been impenetrable, but she still listened and shared my joy or despair. For that, and countless other things: Moa, I love you!

To the rest of the Tidal community, you're awesome too, and I'm happy to be a part of your world!

References

Keeping it on the rails

About Me

Hello, I am ghostchamb3r and this is my Masterclass. 👻

For lack of a better term, I produce electronic dance music. I take influences from a variety of genres, and as a result I use a variety of production techniques and sound design approaches. When I go to upload my music to Spotify I just tell Spotify that it's Hard Techno, because sure, why not? I'm pretty sure it's not actually Hard Techno or any of the other genres from the pulldown list. I don't know what it is. What I do know is that it's danceable and it hits hard on a club room floor.

I produce in Ableton Live and I perform in Tidal Cycles.

I spent a considerable amount of time deciding how I was going to go about performing my music. At the end of the day, turntables were not for me. I love Tidal Cycles and what it lets you do with samples. I love how portable it makes performing electronic music. All of my sets are just a laptop, an audio interface, and two 1/4 inch jack outputs to the house system. It doesn't get any more complicated than that, and it makes the sound techs' lives easier. Getting a projector set up to share my screen is probably the only troublesome thing I encounter, but a lot of the time the house has a projector and it's not a huge issue. I looked into other live coding environments as well and tried Sonic Pi for a short while. Ultimately I just like Tidal better than the others, and that's purely my preference.

I work with prepared code for my live sets and I do so for a number of reasons. Ultimately I'm not someone who uses live code to realize my ideas, or rather I don't use live code out of the gate as a compositional tool.

I'm comfortable with a certain set of tools (probably too comfortable) and with how they support my creative process. My process starts in Ableton. I consider the slick, polished studio version of a piece and its live code counterpart as two different entities. One exists as a fine-tuned, finessed experience intended for listening as a fixed-media representation of a musical idea. The other is a live interpretation of that, with room for improvisation and deviation, so that each live performance is a unique experience. Maybe that approach ruffles some feathers, I can't say for sure, but it's fun for me and doesn't hurt anyone, and that's all I care about.

Preparation of Code

I like to think of preparation of code more as a consideration for the audience than as something intended to help me perform. There are a lot of cool things that can happen if you improvise freely, but even if people can see my screen, what I experience in an improvisational setting is far different from what the audience experiences, because I'm the one making decisions and they are not. They might feel anticipation, and their response might influence my performance, but in the end they don't have any direct control over the music, so it's important to me to consider their experience when I perform.

For me, that means having the skeleton of a piece prepared in advance so that I have room to improvise and react to the audience's response while still being able to move from one section of a piece to another and maintain a sense of movement or progression through musical ideas to keep the audience engaged. A piece doesn't constantly need to be grabbing the listener's attention, in my opinion, but I want to maintain a flow between getting lost in the music and having it bring the listener into the present moment.

Skeleton

Here's a track I made, titled Birth Machine, in Ableton.

Birth Machine DAW track

I personally love the work of HR Giger and have since I was very young. The piece was a reflection on the Giger painting of the same name. I don't agree with some of Giger's ideas in regards to that particular piece of art, but I think it's still a really solid piece of artwork.

Birth Machine - 2 Versions

  • Polished studio version: Birth Machine, tr 2 of NECRONOM
  • Wildly unhinged live coded version: Birth Machine, Live at Ice House MPLS

Here's the signal chain for the main synth track:

Birth Machine signal chain
  • I have Serum piped out to LFO Tool to smooth out some of the rumbliness of the overall signal
  • Compression à la FabFilter-C with aggressive limiting à la FabFilter-L
  • Sidechain compression to attenuate the signal when one of the other synths come in
  • An FFT spectrum analyzer set to a K-14 meter to monitor the signal loudness and see the frequency spectrum
  • Automation on the cutoff frequency of the low pass filter inside Serum's FX chain.

Over the course of a few bars the cutoff opens up more and more and lets the signal through, giving it a nice smooth fade-in as the track plays through its opening lines. I've also automated some pitch bending on the mod wheel to create organic movement in the signal and a nice sense of liveness, for lack of a better term.

Serum patch - Bhad Bhabie for Birth Machine

Birth Machine Serum patch
  • The patch is tuned to a Maqam tuning using a tun file loaded into the global settings. The tun file was generated from Scala. I started getting bored with 12-TET a while ago, and Scala has been a really fun way for me to get excited about writing synth lines again.
  • The piece consists of 8 tracks and the sections are evenly written into 1-bar, 2-bar, or 4-bar phrases.
    • 1 drum rack
    • 5 instances of Serum
    • 1 instance of Cthulhu
    • a track for some sample files
    • a mastering chain on the Master track

Performing Birth Machine in TidalCycles

I just start by rendering samples of each section (in 1-bar, 2-bar, or 4-bar durations depending on their length) and saving them in my Dirt Samples folder, each in its own subfolder. I also put a lot of consideration into the way I name my sample files. I usually include the track name (or an acronym/abbreviation of it) and a description of the sound.

With my brain anxiously trying to keep all the channels sorted while people are watching me, a phrase that describes the sample sound just tends to be much more useful because I will 100% forget what "BM_MainSynth" sounds like or does in the whole mix while "BMGrind" is instantly identifiable and my brain says "oh that's the thing that's making the grindy sound, okay that's the one I need to adjust right now." Or if I'm hearing something coming from a channel that needs to be turned off I can listen and think "okay there's something making a scattering sort of sound" and then I look through my channels and voila, there's "BMScatter".

Drums the "Whole-Chunk" way

If you're the kind of person who starts in the DAW and moves everything over to code afterwards, there are two approaches to performing the drum tracks.

  1. Render the drum samples individually, save them to the Dirt Samples folder and write patterns using those samples, like:

    d1 $ s "Kick [Kick, Snare] Kick [Kick, Snare]"

That can work but there can be drawbacks:

  • Velocity information from MIDI drum racks might be lost
  • Anything very slightly off the beat grid will be lost
  • Drum sections made with sequencers can be a painstaking process to recreate, depending on the specific sequencer and pattern used
  2. Whole-chunk: render the drum track as one full sample, then transform it
    This is what I did for Birth Machine. Even though the opening drum section is just a quarter kick on the first beat and nothing else, I can use splice, fast, and randslice to transform it into something else and then seamlessly bring it back to a simple quarter kick on the first beat, just by changing or erasing the opening segment of that one line of code. Or I can copy and paste the change into a new line, so I can easily go back to something I thought was pretty neat without the hassle of remembering it and rewriting it.
d1 $ s "BMDrums:1" # gain 1.1
d1 $ splice 8 " 1 3 4 2 8 7 7 6" $ s "BMDrums:1" # gain 1.1

You can also do really neat stuff with rhythms when you approach them the "whole-chunk" way. This section:

d1 $ s "BMDrums:2" # gain 1.1

was 2 bars of drum patterns which could have been coded as:

d1 $ s "[Kick, OpenHat:1] [Kick, Snare, OpenHat:2] [Kick, OpenHat:1] [Kick, Snare, OpenHat:2]"

or

d1 $ s "[[Kick, OpenHat:1] [Kick, Snare, OpenHat:2]]*2"

followed by

d2 $ s "[closed:1, closed:2, closed:1, closed:2]*4"

but, in my opinion, it's fun and easier to just write:

d1 $ s "BMDrums:1"

and then to radically transform by simply adding:

d1 $ splice 8 " 1 3 4 2 8 7 7 6" $ s "BMDrums:1" # gain 1.1

I also like to bake additional FX into my drum sections in Ableton. If you slap a reverb from Tidal onto your drum channel, you'll get what you'd expect: a drum section with some reverb. But if you render a sample of a drum section that already has reverb baked into it, or maybe reverb and additional FX, and then do something like:

d1 $ splice 8 " 1 3 4 2 8 7 7 6" $ s "BMDrums:1" # gain 1.1

Then you're suddenly going to get not only the drum samples themselves chopped and rearranged, but also the pre-rendered reverb, delay, or distortion you baked into the sample, and sometimes it can sound really cool, depending on the chop pattern you programmed into Tidal.

For me the whole-chunk approach leads to some really next-level drum patterns that I've found tend to get an extremely positive response from audiences. People in general are accustomed to a kick drum hitting very regular beats. A kick drum that flies all over the place in a pattern so wild it almost feels random is something very raw that alerts people's senses and it's something I use in a lot of my tracks to build up to different sections.

With the live performance of Birth Machine you start with a very regular kick drum beat that quickly starts flying all over the place and once enough synth layers have built up everything releases and drops back to a very regular quarter kick beat. The effect is something similar to a drop or build up in EDM but it's uniquely a live coding sort of technique.

I like to do the same thing with synth lines. In Birth Machine you have a very predictable sort of synth line that, once the track enters its second A section, suddenly changes to feel more synced to the beat but in an erratic way. It's unexpected and when triggered at the right moment the audience responds to it very positively.

Birth Machine code

Birth Machine: full code I start with for performance
setcps (144/60/4)
d1 $ s "BMDrums:1" # gain 1.1
d1 $ splice 8 " 1 3 4 2 8 7 7 6" $ s "BMDrums:1" # gain 1.1

d1 $ fast 16 $ randslice 8 $ s "BMDrums:1" # gain 1.1

d2 $ fast 4 $ randslice 4 $ s " BMGrind:1" # shape 0.2 # lpf 3200 # gain 1.2
d2 $ fast 16 $ randslice 8 $ s " BMGrind:2" # shape 0.2
d2 $ splice 8 " 1 3 4 2 8 7 7 6" $ s " BMGrind:2" # shape 0.2

d2 $ slice 16 "16 15 14 13 12 1 2 3 4 15 14 13 12 5 6 7 8 11 10 9" $ s "BMGrind:1" # shape 0.4
d3 $ slow 2 $ s "BMPulse" # delay 0.4 # delayfb 0.5 # delaytime 0.4 # lpf 2400

d3 $ slow 2 $ striateBy 16 (1/8) $ jux rev $ s "BMPulse" # lpf 700 # delay 0.4 # delayfb 0.5 # delaytime 0.4

d4 $ slow 2 $ s "[BMChop:2 BMChop:2 BMChop:2 BMChop:2] [~~~~]" # delay 0.8 # delayfb 0.6 # delaytime 1.4
d4 $ slow 2 $ s "[BMChop:2 ~ ~ ~] [~~~~]" # delay 0.8 # delayfb 0.6 # delaytime 1.4
d4 $ slow 2 $ jux rev $ s "[BMChop:2 ~ ~ ~] [~~~~]" # delay 0.8 # delayfb 0.6 # delaytime 1.4

d4 $ slow 2 $ s "[[BMChop:2 BMChop:2] ~ [BMChop:2 BMChop:2] ~] [~~~~]" # delay 0.8 # delayfb 0.6 # delaytime 1.4

d5 $ striateBy 16 (1/8) $ s "BMScatter:2"
d5 $ splice 8 " 6 6 6 5 5 5 3 2" $ striateBy 16 (1/8) $ jux rev $ s "BMScatter:2"
d5 $ chew 8 "7 6 5 1 " $ striateBy 8 (1/8) $ jux rev $ s "BMScatter:2"

d5 silence

d1 $ s "BMChop:1" # gain 1.1

d3 $ slow 2 $ splice 8 " 6 8 7 5 3 3 2 1" $ s "BMPulse" # delay 0.4 # delayfb 0.5 # delaytime 0.4
do
  d1 $ s "BMDrums:2" # gain 1.1
  d2 $ s "BMGrind:1"
  d3 silence
  d4 silence
  d5 silence
  d6 silence

d2 $ striateBy 4 (1/4) $ s "BMGrind:2" # shape 0.6

d1 $ s "BMChop:1" # gain 1.1

d1 $ splice 8 "1 4 1 3 2 6 1 7" $ s "BMDrums:1" # gain 1.1
d2 $ splice 8 "[2*8 4 16 2 7 32 16 8]" $ jux rev $ s "BMGrind:2" # shape 0.3
d3 $ slow 2 $ slice 8 "8 8 8 ~ ~ 2 2 1" $ s "BMPulse" # delay 0.4 # delayfb 0.5 # delaytime 0.4

d3 $ slow 2 $ slice 8 "8 8 8 ~ ~ 2 2 1" $ jux rev $ s "BMPulse" # delay 0.4 # delayfb 0.5 # delaytime 0.4

d4 $ slow 2 $ s "[BMChop:2 BMChop:2 BMChop:2 BMChop:2] [~~~~]" # delay 0.8 # delayfb 0.6 # delaytime 1.4
d5 $ striateBy 16 (1/8) $ s "BMScatter:4"
d5 $ splice 16 " 3 2 16 15 14 12 11 8 7 6 5 3 1" $ striateBy 16 (1/8) $ jux rev $ s "BMScatter:4"

d6 $ s "BMGrind:1" # gain 1.1
d4 $ fast 8 $ randslice 8 $ jux rev $ s "BMChop:2" # delay 0.8 # delayfb 0.6 # delaytime 1.4
do
  d1 $ s "BMDrums:2*8" # gain 1.1
  d2 $ s "BMGrind:3*4"
  d3 silence
  d4 silence
  d5 silence
  d6 silence

d2 silence
d3 silence
d4 silence
d5 silence

I also like to keep an entire set in one file, with comment breaks for each piece. I keep silence commands for all channels saved either at the beginning or the very end of the document, so I can always jump to the command I need if things go too far off the rails. Depending on the pieces I have planned to perform, or how much improvising I plan to do, I try to keep at least 10 channels ready to silence, sometimes as many as 20.

Code saving strategy

With Birth Machine I have some changes to the studio/vanilla version prepared ahead of time. I don't keep the original code saved, so whatever I forget to change back at the end of a performance remains in the code; the piece permanently changes and mutates with every performance. For other pieces I keep both 1) a slightly modded version and 2) a very vanilla version that is completely faithful to the studio version, so that I can spontaneously do entirely new and different things to it live.

Sapphica

I did something similar with a piece commissioned by the Minnesota Opera, titled Sapphica.

  • YouTube performance: Sapphica, Minnesota Opera
  • Remixed version on Bandcamp: Sapphica Redux
Sapphica code: vanilla version of Act 2
setcps (120/60/4)

d1 $ slow 5 $ s "Sapph2intro"
do
  d1 $ sound "wolfkick [BehemothSnare, BehemothKick] wolfkick [BehemothSnare, BehemothKick, BehemothClap]"
  d2 $ sound "[BehemothOpen BehemothClosed BehemothOpen BehemothClosed]*4"
  d3 $ sound "[~] [BehemothMini BehemothMini ~ ~] [~] [BehemothMini BehemothMini ~ ~]"
  d4 $ slow 3 $ sound "Sapph2Bass1:1" # gain 1.1

do
  d4 $ slow 2 $ s "Sapph2Bass1:2" # gain 1.1
  d5 $ slow 3 $ s "Sapph2rythym:1"
  d6 $ slow 3 $ s "Sapph2rythym:2"

do
  d4 $ slow 3 $ s "Sapph2inter:1"
  d5 $ slow 3 $ s "Sapph2inter:2"
  d6 $ slow 3 $ s "Sapph2vocalchop:1"
  d7 $ slow 3 $ s "Sapph2vocalchop:2"

d7 silence

do
  d1 silence
  d2 silence
  d3 silence
  d4 $ slow 4 $ s "Sapph2trans"
  d5 $ slow 4 $ s "Sapph2out:3"
  d6 silence
  d7 silence
  d8 silence
  d9 silence

d4 silence

do
  d1 $ sound "wolfkick [BehemothSnare, BehemothKick] wolfkick [BehemothSnare, BehemothKick, BehemothClap]"
  d2 $ sound "[BehemothOpen BehemothClosed BehemothOpen BehemothClosed]*4"
  d3 $ sound "[~] [BehemothMini BehemothMini ~ ~] [~] [BehemothMini BehemothMini ~ ~]"
  d4 silence
  d5 silence
  d6 $ slow 4 $ s "Sapph2out:1"
  d7 $ slow 4 $ s "Sapph2out:2"

d1 silence
d2 silence
d3 silence
d4 silence
d5 silence
d6 silence
d7 silence
d8 silence
hush
Sapphica code: slightly modded version
setcps (120/60/4)

d1 $ slow 5 $ s "Sapph2intro"

do
  d1 $ sound "wolfkick [BehemothSnare, BehemothKick] wolfkick [BehemothSnare, BehemothKick, BehemothClap]"
  d2 $ sound "[BehemothOpen BehemothClosed BehemothOpen BehemothClosed]*4"
  d3 $ sound "[~] [BehemothMini BehemothMini ~ ~] [~] [BehemothMini BehemothMini ~ ~]"
  d4 $ slow 3 $ sound "Sapph2Bass1:1" # gain 1.1

d4 $ slow 3 $ striateBy 16 (1/4) $ rev $ s "Sapph2Bass1:1" # gain 1.1

do
  d4 $ slow 2 $ s "Sapph2Bass1:2" # gain 1.1
  d5 $ slow 3 $ s "Sapph2rythym:1"
  d6 $ slow 3 $ s "Sapph2rythym:2"

do
  d4 $ slow 3 $ s "Sapph2inter:1"
  d5 $ slow 3 $ s "Sapph2inter:2"
  d6 $ slow 3 $ s "Sapph2vocalchop:1"
  d7 $ slow 3 $ s "Sapph2vocalchop:2"

do
  d6 $ slow 3 $ rev $ striateBy 12 (1/4) $ s "Sapph2vocalchop:1"
  d7 $ slow 3 $ striateBy 12 (1/2) $ s "Sapph2vocalchop:2"

d7 $ slow 3 $ rev $ slice 12 "12 11 10 9 4 5 6 7 1 2 3 8" $ s "Sapph2vocalchop:2"
d7 silence

do
  d1 silence
  d2 silence
  d3 silence
  d4 $ slow 4 $ s "Sapph2trans"
  d5 $ slow 4 $ s "Sapph2out:3"
  d6 silence
  d7 silence
  d8 silence
  d9 silence

do
  d4 silence
  d5 $ slow 4 $ striateBy 16 (1/4) $ s "Sapph2out:3"

do
  d1 $ sound "wolfkick [BehemothSnare, BehemothKick] wolfkick [BehemothSnare, BehemothKick, BehemothClap]"
  d2 $ sound "[BehemothOpen BehemothClosed BehemothOpen BehemothClosed]*4"
  d3 $ sound "[~] [BehemothMini BehemothMini ~ ~] [~] [BehemothMini BehemothMini ~ ~]"
  d4 silence
  d5 silence
  d6 $ slow 4 $ s "Sapph2out:1"
  d7 $ slow 4 $ striateBy 16 (1/4) $ s "Sapph2out:2"

hush

With the code that already has some variations:

  • I have changes that I know I like and can adjust the values.
  • I can also easily transition from things that work to things that I haven't tried before.

With the completely vanilla versions:

  • I have a structure that aligns with the studio version.
  • I can change and reinterpret in a much more improvised manner.

I usually choose one version of the code to commit to and then keep that in my file for that set.

SuperCollider template

I also keep a template for all my SuperCollider code. It contains all the code I would want ready on the fly to save time during a performance. I comment all the lines so that I know what does what. I find it helpful to have these things ready in one file. I'd rather have the code do what I expect while performing than have it send back an error because I made a typo and didn't capitalize something. If an error is going to happen, I want it to be because I pushed the limit of the hardware or software, but that's just me.

SuperCollider setup and customizations
//To check what audio devices you have available.
ServerOptions.devices

//To boot the server on your ASIO device. Replace the Focusrite name with your own device, as it appears in the array printed by ServerOptions.devices above.
s.options.inDevice_("Focusrite USB ASIO").outDevice_("Focusrite USB ASIO"); s.boot;

//Set the sample rate
s.options.sampleRate = 44100;

//Create 20 output bus channels (10 stereo pairs)
s.options.numOutputBusChannels = 20;

//Start SuperDirt and specify the number of orbits (stereo channel pairs)
~dirt.start(57120, [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]);

//Blocksize, change depending on your hardware and latency
s.options.blockSize = 128;
s.options.hardwareBufferSize = 128;

//Start superdirt
SuperDirt.start;

// In case you need to increase the memory allocated to supercollider
s.options.memSize = 3145728;
s.options.memSize = 8192*32;

//Kills the server and cuts all audio from supercollider
Server.killAll

//If you get latency issues you can set it here
s.latency = 0.05;

//To record your session
s.record;
s.stopRecording

//Set the orbits up for Tidal

~dirt.orbits[0].set(\fadeTime, 4); // orbits are zero-indexed: 0 to 9 for ten orbits
~dirt.orbits[1].set(\fadeTime, 4);
~dirt.orbits[2].set(\fadeTime, 4);
~dirt.orbits[3].set(\fadeTime, 4);
~dirt.orbits[4].set(\fadeTime, 4);
~dirt.orbits[5].set(\fadeTime, 4);
~dirt.orbits[6].set(\fadeTime, 4);
~dirt.orbits[7].set(\fadeTime, 4);
~dirt.orbits[8].set(\fadeTime, 4);
~dirt.orbits[9].set(\fadeTime, 4);

//code for Sidechain compressor taken from https://github.com/musikinformatik/SuperDirt/blob/develop/hacks/filtering-dirt-output.scd
~bus = Bus.audio(s, numChannels:2); // assuming stereo, expand if needed

~dirt.orbits[0].outBus = ~bus; // play into that bus.

// make a side chain controlled by second orbit, affecting the first
(
Ndef(\x, {
var control = InBus.ar(~dirt.orbits[1].dryBus, 2).sum;
var dirt = InBus.ar(~bus, 2);
Compander.ar(dirt, control, thresh:0.006, slopeBelow:1, slopeAbove: 0.1, clampTime:0.05, relaxTime:0.1)
//dirt * (1 - (Amplitude.kr(control) > 0.007).poll.lag(0.01));
}).play;
)

/*
cleaning up when you're done (run the code below to release the sidechain):
*/
(
Ndef(\x).clear;
~bus.free; // free the bus created for the sidechain
~dirt.orbits.do { |x| x.outBus = 0 };
);

// algorave mastering, roughly according to
// https://mccormick.cx/news/entries/heuristic-for-algorave-mastering
(
~busses = ~dirt.orbits.collect { |each|
var bus = Bus.audio(~dirt.server, ~dirt.numChannels);
each.outBus = bus;
bus
}
);

(
Ndef(\x, {
var level = 1;
var distortion = 3;
var reverbFeedback = 0.1;
var all = ~busses.collect { |each| InBus.ar(each, each.numChannels) };
var mix = all.sum { |x|
var d = { 0.01.rand } ! x.size;
DelayN.ar(x, d, d)
};
var loop = LocalIn.ar(~dirt.numChannels);
5.do { loop = AllpassL.ar(loop, 0.15, { ExpRand(0.03, 0.15) } ! 2, 3) };
mix = loop * reverbFeedback + mix;
mix = LeakDC.ar(mix);
LocalOut.ar(mix);
mix = Compander.ar(mix, mix, 0.3, slopeBelow:1, slopeAbove:0.5, clampTime:0.01, relaxTime:0.01);
mix = (mix * distortion).tanh * (level / distortion.max(1));
mix
}).play;
);

/*
cleaning up when you're done:
*/
(
Ndef(\x).clear;
~busses.do { |x| x.free };
~dirt.orbits.do { |x| x.outBus = 0 };
);

Closing

I don't think my approach is right for everyone. In fact, it might only be right for me and only me. The intent of this article was just to share my coding practice and open the discussion up further. If anyone found anything useful or inspiring in any capacity I think that would be wonderful.

This article also came about after some forum thread posts I made in response to Heavy Lifting's blog post: Working with samples the Heavy Lifting way. The discussion thread from her article is really interesting and I was inspired to respond with my own approach.

I think each coder's approach is going to be unique in some capacity and they're all valid. People change too and that's especially true with musicians, producers, and composers. The approach I take now might not be the one I take seven years from now or even between one performance to the next and I think there's room to move between approaches fluidly if it sparks your creativity and brings you joy. I would hope that we are all doing this to have fun, and ultimately we should do what is fun for us.


Comments: Share your thoughts and keep the discussion going via this Club Tidal forum thread.

Tidal CyclistLina Bautista
akaLinalab
LocationBarcelona
Years with Tidal10?
Code onlineGitHub
Other music/audio swMaxMSP, VCVRack, ...
HardwareAnalog Four, Modular synths
CommentsClub Tidal Forum Thread

I’m Lina, a composer and live coder; I see myself as a long-distance runner in live coding. I’m not a fast learner, but I’ve always been enthusiastic about watching and analysing other live coders' techniques, especially in live sessions (it’s not the same to watch a streamed or pre-recorded session as to feel the space and share it with the live coder). That’s probably why I’ve organised many performances, workshops, Algoraves and projects around live coding and Tidal.

Livecoding

How do you approach your livecoding sessions?

If we talk about two main approaches to a live coding session, fully pre-composed material on the one hand and a from-scratch/blank-screen session on the other, I’m closer to the second.

It was great to read the Heavy Lifting post about this approach; it’s great to see I’m part of the blank-screeners team :). Personally, I’m not able to craft the session the way I want if I have too many lines. I’ve tried, but I get lost in the code and I’m not able to dig deep into everything that’s happening. It’s probably because I have to control every note, every sound. Besides, I think there’s a special beauty in writing a single line of code that goes beyond the musical aspect: it can be performative, almost poetic.

(… but don’t get me wrong, I totally admire live coders who are able to prepare everything in advance).

Most of the time (when I can carry them) I use synthesizers to perform. I know Tidal is at its best with samples, with all the incredible functions to manipulate them, but I come from the DIY modular synth scene, so I really enjoy analogue sounds. I currently use MIDI and sometimes audio signals to control my synths.

What functions and coding approaches do you like to use?

I guess what I like most about Tidal is the pattern structures and the possibility of creating and modifying algorithms on the fly. I’m a fan of the mini-notation and of creating complex polyrhythms and structures with just a simple line, like:

d1 $ s "[bd(<9 5 4>,16),can:4(7, 16, 14) ]" 

haha, not really my style, but you get the idea…

Other functions that define my sessions are arpeggios and scales; I usually make changes between the notes, the number of notes, the modes of the arpeggios and so on.

d1 $ fast 2 $ arp "up" $ n "e'min'6" # s "superchip" # octave 2

And combining all that, I try to reach things like this during a session:

d1 $ s "bd*4"
d2 $ fast 2 $ arp "up" $ n "e'min'<6 8>" # s "superchip" # octave 2
d3 $ s "superchip(<7 5 1>,12)" # n (scale "minor" "<0 2> .. <12 7 3>"|+ 4)

I’ve found a useful way of making transitions by transforming rhythms from binary patterns to ternary and vice versa. It creates interesting polyrhythms, and with different subdivisions I have a lot of performative options.
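
A minimal sketch of that idea (an illustration, not a transcript of an actual session): keep the number of hits, but move them from a binary grid of 16 steps to a ternary grid of 12.

d1 $ s "bd(5,16)" -- five hits on a binary grid (16 = 2^4 steps)

d1 $ s "bd(5,12)" -- the same five hits on a ternary grid (12 = 3 x 4 steps)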

I'd like to be able to switch to something completely different more quickly sometimes, but I guess that’s the problem of not having written anything else… Or not being able to think fast enough to create something new…

Tidal Contributions

How do you contribute to Tidal Cycles? What have you worked on?

A few years ago we created (with Iván Paz, and thanks to many enthusiastic live coders) the Toplap Barcelona node, and since 2018, maybe before, we’ve been organising workshops, concerts, festivals and projects non-stop… we’re always planning exciting things around live coding.

What motivates you to work on Tidal?

I like the active community that is always changing, creating new functions and developing things. For example, I’ve been dreaming for years of implementing functions to use CV (control voltage), and it seems it’s already there, so I’ll give it a try.

Music

Tell us about your livecoding music.

My music varies from melodic ideas to noisy/ambient textures. I enjoy making multichannel experimental sessions as well as dance sessions, and everything in between. Making people dance has been a challenge for me for years, but I think I’m finally getting there.

What projects are you currently working on or planning? What's next?

I’m not sure yet, but I have been working with new material lately and spending more time on live coding than with my other practices (I also have a band), so maybe it’s time to record something new, we’ll see…

Comments: Club Tidal Forum Thread

Tidal Cyclistdigital selves
LocationLondon, UK
Years with Tidal5ish yrs
Other LiveCoding envSuperCollider, p5.js, hydra, marching.js, Max (and/or) pd
Music available onlineSoundCloud, Bandcamp
Code onlineGitHub
Other music/audio swAudacity, Renoise Tracker DAW @_@
Forum ThreadAutonomous Computer Music Tidal Forum Thread
digital selves

photo credit: Antonio Roberts

Livecoding

What do you like about livecoding in Tidal? What inspires you?

I think the main thing I like about Tidal is working with patterns: transforming, shaping and shifting them, and listening to the changes in real time. I recently co-ran a workshop with Iván Paz, Alex McLean and Dave Griffiths in Sheffield and at Hangar in Barcelona (we did it remotely at the same time; thanks to On The Fly for having us :) ). We talked a lot about patterns in the context of other traditions, like weaving. To me, it's interesting to think about computer music in this way.

I'm also super inspired by everyone else who is contributing, whether by making music, creating forums for discussion, or working hard to make it an inclusive space. The community has always been one of the best things about TidalCycles <3

How do you approach your livecoding sessions?

I feel like I have two "modes" when it comes to live coding: testing things out and performing things. They're not mutually exclusive though, and often I will test things live on stage, or perform things to nobody else but me.

What functions and coding approaches do you like to use?

I find it hard to hold more than one or two functions in my head at the same time, and tend to go through phases when performing live of only using the same ones, because they're the ones I remember under pressure.

Some of my favourites recently are using press and fshift on drum patterns:

(All of the samples I use are available to download here)

d1
$ rarely press
$ almostAlways (jux rev)
$ stack [
    s "sfe-fx/4" # n (irand 4),
    gain "1*8?" # s (choose ["idmhit2", "revkit"])
      # n (irand 16) # speed "[0.75 0.5]/16"
  ]
# fshift (range 100 300 $ slow 16 $ sine)
# gain 1.124
# speed "[1, 1.02]"
# krush 3

I also wrote a piece for the posthumanist magazine recently, as they had an issue on "rhythms", where I tried to compose some prose text embedded with TidalCycles functions, and it re-ignited my interest in the use of the sew and stitch functions, which I think is a super cool way to add sonic variation to patterns. E.g.

d1
$ sew (iter 4 "1 0")
    (n "0 .. 7" # sound "cps1")
    (n "0 .. 7" # sound "cpu")
# orbit 2

and

d4
$ stitch (binary "<127 63>") (sound "hjdsynth:12") (sound "hjdsynth")
# cutoff (range 200 4000 $ slow 8 $ saw)
# resonance (range 0.1 0.2 $ slow 8 $ saw)
# note (choose [5,9,0, 12, 16,17, 19])
# room 0.89 # orbit 3

Using the binary pattern notation to calculate where the two melodic sounds counteract with each other is super fun!

Do you use Tidal with other tools / environments?

Tidal is super cool as it doesn't have to be used with SuperCollider, and it's been fun to work out how to pattern sources other than just samples or synthesisers.

I've had a go in the recent past at using it to program the sounds of an artificial voice. Alex and I first worked on using it to pattern the Pink Trombone vocal synthesis - if you've not heard it, it's worth checking out here - and then more recently on creating a voice model using "Neural Audio Synthesis", with a tool called RAVE which has come out of research at IRCAM, and live programming this artificial voice from Tidal.

We don't have any public facing documentation at the moment, but hoping to be able to share something more extensive on this soon 👀

Tidal Contributions

How do you contribute to Tidal Cycles? What have you worked on?

A little while ago now, I worked on creating an autonomous agent that generated its own patterns of Tidal code. This was a fun project during the summer of 2020, which I wrote up a bit about on the old TidalCycles blog here. This was part of the Summer of Haskell project, which I would encourage anyone who wants to work on Tidal development to be a part of!

I guess the other way I have contributed is through running workshops on TidalCycles, which I've done in the past but not so many recently. It's always a nice way to get more people engaged and the install part has become much easier in recent years :)

What motivates you to work on Tidal?

Being part of a friendly community and wanting to help make new and exciting ways for humans to interact with algorithms.

Also I want to help inspire other women to be a part of the process of developing software! If there are any women out there that would be interested but don't know where to start please reach out and I'd love to help in any way I can.

Music

Tell us about your livecoding music.

I would say my music is meant to be equal measures fun and playful but also serious and emotional. I like to toe this line in the sounds that I make, making people unsure whether they can dance to the music or not. I've been super inspired by some other artists that do the same kind of thing, e.g. Aeoi, sv1, DJH, Asia otus, 5ubaruu & saves, +777000, sleepsang.

How has your music evolved since you have been livecoding?

I've learnt a lot about creating complexity in rhythms, and about how to elicit surprise in listeners by introducing random variations in both structure and timbre. I've learnt a lot about collaboration too from the people I've worked with since I started live coding! And from working with my machine partner sometimes too └[∵┌]

Also I find myself trying to recreate a lot of rhythms I hear into TidalCycles structures, which is a part of my brain I can't turn off now :S

What samples or instruments do you like to work with?

I basically pick up a lot of samples here and there that I like to work with. I think Lucy's recent post about this outlines a practice very similar to mine of being a sample collector.

I have been using the Serum VST for some midi sounds recently too, as it's a nice tool to work with for shaping melodic sounds.

What projects are you currently working on or planning? What's next?

I'm having a bit of an unplanned creative hiatus at the moment due to a lot of work (have to finish a PhD at some point in the near future) but I've got a few bits that I was working on before that I'm hoping at some point can turn into another release.

Add your comments in the Club Tidal thread.

Thinking about approaches to from-scratch improvised live code performance.
(As I write this it's sort of turning out to be everything I think about Tidal!)

Intro

Hi, I'm Lucy, and I'm a live coder. In this blog post I'm going to be talking about some of my strategies for using samples and approaches to from-scratch or blank-screen live coded performance.

What is 'from-scratch' anyway?

Some things to bear in mind:

  • I didn't build my software, or my computer
  • I've listened to music before
  • I practice
  • I have 'ideas'
  • Why do we even care?

I dunno where the original idea came from that live coding performances should start with a blank screen. I thought it might be from the toplap manifesto or the generative manifesto, but I looked back through both of those and don't think they're really saying that.

At any rate, when I started live coding, in the context I was in (Sheffield, 2015), it felt like blank-screen was the only way. It excited me (and continues to excite me) but it doesn't excite everyone. I feel (maybe wrongly?) that the emphasis on fully from-scratch performances can be a barrier for some people, and when I run workshops I always try to emphasise that while I start from a blank screen, it's not compulsory. But I do feel that the Algorave/live coding approach of starting with a blank screen and embracing error is really exciting and necessary - without this forum for experimental, risky performances I wouldn't be able to do what I do.

Lately it seems the blank-screeners have decreased in number and I see more and more pre-prepared performances. I'm often the only blank-screener at a gig.

Disclaimer: I'm not a die-hard - I have used pre-prepared code in performances, and particularly if I'm using MIDI I have a few snippets prepped. I have pre-prepped code in SuperCollider, I've done performances where all the code was written in advance, and I've recorded performances, edited them and played them live in Ableton (shh, don't tell the live code gods).

I guess what I'm trying to say is it doesn't really matter anyway, it's just something I personally enjoy doing that I find exhilarating, and that I want other people to enjoy, while also recognising that it can be a bit scary.

I think I said this before in my newsletter - but here is an anecdote I like to remember when I'm thinking about this stuff:

I mentioned in work that I needed to practice for a gig and my colleague said "if you make it all up, why do you need to practice?"

-- which is such a great question! What I need to practice is making it up. Here's how I do it.

1. Choosing samples

While I often use (usually hardware) synths in my set, what drew me to Tidal in the first place (and what forms the core of my performance) is the seemingly limitless opportunities for sample manipulation.

Of course you have your drums, synths, loops, acapellas, whatever, but what I really like is incorporating non-musical sounds into my sets. My go-to resource for this is freesound.org.

\m/ blessed be the freesound contributors \m/

I'll search for whatever I'm thinking about (bells, bats, woodwork, helicopters, notifications etc etc), have a listen and download a batch of sounds - anything that catches my interest. At this stage I don't know if they'll work or not, but that's ok.

Some other favourite sources:

  • Blood Sport sample pack
  • Legowelt
  • samples obtained from YouTube etc, legally or otherwise*
  • Plundering the sample libraries of collaborators (particularly Graham Dunning's - sorry Graham)
  • Recording sounds on your phone (or fancier equipment if you have it)
  • Plundering friends' recordings for remix material (usually a good idea to ask first)

*Side note on my ethics for sampling: if the person is extremely rich I will steal their sounds. If they are not then I don't. I don't feel bad about it. You should make your own mind up about this though.

2. Editing

I usually do a bit of sample editing in Ableton or Reaper next - trimming off silences, roughly normalising volume, checking for loop-ability. I don't spend too long on this - tbh I probably should and it would make things sound better.

3. Experimentation

This is the bulk of how I prepare. I usually update/refresh my samples every few months, but I might reach back into the archives for some oldies too. I don't use many of the standard Tidal/SuperDirt sounds (although I used to use them almost exclusively). I do a bunch of experimenting with my new sounds, combining them with old favourites and using my favourite functions to come up with some sketches that sound good to me. This is a semi-mystical process and obviously very personal, but I find this to be extremely enjoyable and almost hypnotic sometimes.

My favourite functions

Over time I've come up with my 'favourite functions' - actually these haven't really changed very much from the ones I used in my early sets, which I chose by going through the entire Tidal documentation and trying everything - you can do this too! It's a bit tedious at times, but for me it really helped me get my head round how Tidal thinks.

I pull the new samples into Tidal, and try a few of my typical function combos to see how they feel.

Short sounds

I'll use the mininotation and some simple functions to play with rhythms.

  • {} - for polyrhythms
  • speed / hurry
  • chop
  • density (aka fast/slow)

Patterns

I'll start playing around with putting some patterns/sequences together.

  • iter
  • jux
  • sometimes/often/every
  • chunk

Longer sounds

I'll use the following functions to test out loops and textures.

  • loopAt
  • slice/splice
  • chop/striate
  • randslice
  • legato

Effects

I'll try some simple effects to manipulate the sounds.

  • vowel/hpf/lpf
  • shape

And honestly, those functions, plus a bit of randomness/continuous functions, make up 99% of what I do in performances. You can get so much complexity with just a very little bit of Tidal syntax! Having a limit on the functions and sounds I'm using, for me, really supports from-scratch improvisation! (I actually wrote about this before on the Tidal forum).

While I'm experimenting I'm not worrying too much about what it sounds like, or the timings, but I'm more looking for a feel, and thinking about how something might work in a set (my criteria: do I like it?). Often at this stage I will discard individual samples or whole groups of samples. I might go back and edit them, or I might go hunting for similar or complementary sounds. I can spend a few hours doing this, and usually when I'm in the zone I will break into sections that would be more like what I do live (which is essentially the same as the experimentation outlined above, but with more consideration to structure and timing).

Sketches

So this way I come up with some little sketches which sort of act as the inspiration for my set. They won't be exactly what I play live (although I might refer to them if I have a panic), but they give me an idea of the approaches I might use with each sample or set of samples.

(All samples referenced below are available here on Google Drive.)

Sketch 1

setcps (137/60/4)

d1
$ chunk 4 (hurry "<2 0.5>")
$ slice 8 "7 6 5 4 3 2 1 0"
$ loopAt 2
$ sound "skel:8 skel:8"
# legato 1
# gain 1.2

d2
$ chunk 4 (# gain 0)
$ jux (iter 4)
$ sound "{kick kick kick kick, 9sd*3 ~ ~, ~ ~ 9hh 9hh*2 [9hh*2 9oh]}"

d3
$ sometimes (hurry "0.5 2")
$ chunk 4 (# speed (range 1 2 sine))
$ sound "vkb*8"
# speed "0.5"
# legato 0.5
# shape 0.8

Sketch 2

d4
$ every 2 (density 2)
$ slice 8 "0 <0 1 2 3>"
$ sound "bev:1 bev:2"
# legato "0.5 1"
# gain 1.2
# shape 0.2
# speed 2

d5
$ sometimes (hurry 2)
$ chop "[1,4]"
$ sound "9rs*16?"
# shape 0.4

d6
$ every 4 (density "8 1")
$ sound "vkl"
# speed (choose [1,1,1,4,7])

d7 $ sound "kick kick(3,8)"

Sketch 3

d1
$ striate 4
$ sound "emub*8"

d2
$ sound "{emud, emud*8}"
# n (irand 8)
# legato 1
# shape 0.4

d3
$ iter 4
$ chunk 4 (# speed (range 1 2 saw))
$ sound "emustab:1(<3 5 6>,8)"
# legato 1

d4
$ sound "emupiano"
# n (irand 4)
# size 0.4
# room 0.1
# cut 1

4. Choosing a palette

From my experiments above I choose a palette of sounds. I usually try to think about sounds in the following categories:

  • Drums/percussion
  • Bass
  • Lead
  • 'Weird'/texture

5. Performing the set!

Usually I don't practice a full set before the gig, but from my experiments I will have some ideas/sections that I want to go for. I used to always write myself a crib sheet but I've mainly stopped doing that now (although I often miss it - just laziness really!). Usually they look something like the below - prompts for a feel or a texture, or the names of specific samples.

  • percussive bit
  • skel (or name of another stand-out/central sample)
  • ambient synth bit
  • dense textures
  • degrade/breakdown
  • etc

One thing I struggle with is transitions. Tidal has some functionality for this but I've never got on with it. digital selves is amazing at this <3 - I need to work on it!
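
(For reference: Tidal's built-in transition functions, like xfade and anticipate, are evaluated in place of a plain d1-style assignment. A minimal sketch:)

xfade 1 $ sound "bd*4 hh*8" -- crossfade slot 1 into the new pattern

anticipate 1 $ sound "arpy(3,8)" -- build into the new pattern with fills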

Anyway, despite all this preparation, on the day I might do something totally different anyway. While I have ideas, it never sounds the same as it did in practice (particularly given the quirks of an individual PA or venue environment), and if there's a sound or a texture that pops up in the live environment that I really like then I'll follow that idea and see where it goes. I also try to pitch things in line with the other performers on the night, or where I am on the bill. If it's a chill vibe then I tend not to go in hard with like 180bpm harsh noise (and vice versa).

EMERGENCY TIP: If in doubt stick a big fat 4:4 kick under everything and it will probs sound decent :)

It doesn't always go well! But I usually enjoy myself regardless. If I have a crash or like accidentally set the BPM to like 120000 then it always feels like a very authentic live coding set and I enjoy that. It can be hard sometimes if you're the only blank-screener and everyone else's set is super polished and yours is a bit of a shit show, but I have to remind myself that's part of the fun. I find from-scratch live coding performances to be genuinely exhilarating and one of the best things in my life! (phew...)

6. De-mystifying the blank screen

What I'm trying to say with all this - (and well done if you've made it this far) - is that while the from-scratch approach might seem super cool and gonzo, there is a degree of prep that goes into it that I really feel is a process anyone can follow if they want to get into performing in this way. I actually find it super freeing to plug my laptop into the PA and just see where the sound goes, and I think given the nature of Tidal this can be a very relaxing way to play, rather than starting with strong preconceived ideas about what you want something to sound like or how you might like the structure to be. For me there are better tools than Tidal for performing in that way.

I also find this approach to be a really beautiful way to develop my relationship with my computer - it's a wonderful tool that does so much for me, but it can also be a friend and musical collaborator - I learn so much from our performances together <3.

From scratch coding can also feel safer with a human collaborator - find a friend and use Troop, Estuary or Flok to jam together. When you don't have to do everything yourself it can be easier to find the space and confidence to improvise.

Have a go from the safety of your favourite spot and try to enjoy the process!

7. Final warning

Having said all the above - this approach does require a certain FU attitude!!!! I still can't believe that people actually want not only to watch me perform and to listen to my music but actually to write and talk and teach about it, when I'm doing all this for purely selfish and personal reasons! Of course it makes me so happy when people like my stuff, but honestly I would do it even if they didn't, and that's why I think the from-scratch approach works so well for me, it's pure expression and experimentation, with a good dose of on-stage adrenaline. I'm super grateful for all the friendships and experiences live coding has given me. TY!

And if anyone is still reading... if you want to check out more:

Comments

  • What do you think? Does this from-scratch process resonate? Do you have different ideas?
  • Add your Comments in the Club Tidal thread.

Tidal CyclistMel Laubscher
akadjmelan3 (dee-jay-muh-lun-dree)
LocationCape Town, South Africa
Years with Tidal3 yrs
Other LiveCoding envEstuary, SuperCollider
Music available onlineYouTube - djmelan3
Other music/audio swPure Data, Logic, ProTools and similar DAWs
CommentsClub Tidal Forum Thread

Livecoding

What do you like about livecoding in Tidal? What inspires you?

I love the community around live coding and TidalCycles. What inspires me is how welcoming the community is and how simple it is to become involved. If you're new to TidalCycles there's a large community keen to help. In terms of TidalCycles itself, I really enjoy the interactive aspect of the language, something that traditional DAWs lack. Live coding allows me to express myself musically much faster than a DAW can. I also find it easier to make creative decisions with Tidal, whereas using a DAW often leads to overthinking and never actually finishing any projects.

How do you approach your livecoding sessions?

I largely participate in collaborative work, in which the group I collaborate with will brainstorm and decide upon a variety of strategies to use when we're jamming together. In both solo and collaborative work, depending on the context, I'll take an improvisational approach and randomly select audio samples, functions or write patterns I'd like to use in combination with one another. This is mainly because I'd like to discover (and be surprised by) all kinds of musical possibilities that any combination of functions, samples and patterns can create in Tidal.

What functions and coding approaches do you like to use?

My approach is mostly improvisational/experimental, but recently I've been experimenting with longer-form composition, attempting to create more structured patterns - i.e. placing a few stack functions within a cat function or a few cat functions within a stack function, and then proceeding to expand on these.
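
A minimal sketch of that structure (illustrative sample choices):

d1 $ cat [ stack [ s "bd*4", s "hh*8" ]             -- first cycle: one stacked layer
         , stack [ s "bd(3,8)", s "arpy*2" # n "<0 3>" ] -- next cycle: another
         ]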

I also enjoy using a number of functions that control the loudness (e.g. # gain (range 0.35 0.85 $ fast 12 sine)) and spatiality (pan) of the audio I work with, within the confines of stereo monitoring. To do this I combine pan and gain and place the audio at different areas within the stereo field. For example:

d3 $
-- slow 2 $
fast 2 $
sometimes (slow 2) $
almostAlways (# gain 0.65) $ s "[[x*2][~ x][x@2][x]]" # s "hh27"
# delay (choose [1/12, 1/4, 1/8])
# pan (fast 2 $ sine)
# gain 1.15

Do you use Tidal with other tools / environments?

I've mostly used MiniTidal in Estuary when collaborating, simply because it's an easy-to-access platform, especially for non-programmers such as myself. When I work on my own I experiment with SuperCollider and Tidal in VS Code. I have some experience with Pure Data as well, and it was actually through creating small patches in Pure Data that I became interested in using programming languages to solve musical problems.

Music

Tell us about your livecoding music.

Since 2020 I've been a co-collaborator in SuperContinent. We've performed together at various conferences, online events and even at an online meeting. Locally, I've worked alongside students in a small university ensemble, where we performed in online environments as well. As with any collaborative context, one has to be aware of the others in the group at all times. I find this to be an exciting challenge, especially when my co-collaborators come from varying musical backgrounds. Using our predetermined strategies, we improvise and live code our performances from scratch. When I do my own experiments, the goal is to write pre-composed code that's ready to run and that I adjust throughout the performance to create as much variation as possible.

How has your music evolved since you have been livecoding?

I've experimented a lot through my use of the language and observed a lot through collaboration. Alongside learning from my collaborators, I taught myself how to code with Tidal by watching what everyone else did. I now find that I'm able to use Tidal to express ideas far more clearly than I ever could with any other tool.

What samples or instruments do you like to work with?

I work with all kinds of samples. I don't limit myself to using particular samples, but when I am looking for a particular overall "sound" I'll usually pick samples that fit with what I'm going for.

What projects are you currently working on or planning? What's next?

Currently, I have a series of upcoming talks hosted by the University of Cape Town's South African College of Music. In these I'll be demonstrating the technique of live coding, as it is still very much a new approach to performing music in South Africa. I'll also be performing solo for the first time ever as part of this demonstration. Subsequent talks in this series will cover some of the work I've done during collaborations, and I hope to meet new people who might take an interest in learning how to live code themselves.

Comments: Club Tidal Forum Thread

Tidal CyclistAtsushi Tadokoro
akatado, yoppa
LocationMaebashi Japan
Years with Tidal7 yrs
Other LiveCoding envSuperCollider, SonicPi, Hydra, Kodelife
Music available onlineSoundCloud, Vimeo
Code onlineGitHub
Other music/audio swAudacity, Pure Data, Ableton Live
CommentsClub Tidal Forum Thread

photo: Phont @phont1105 (ANGRM™)

Livecoding

What do you like about livecoding in Tidal? What inspires you?

What I like about live coding with TidalCycles is that I can improvise and change patterns flexibly on a per-part basis (the connections d1, d2, d3, ...). It also combines musical and coding ideas at a high level.

How do you approach your livecoding sessions?

In my case, I pre-code a rough flow in TidalCycles according to the time I need to perform. However, I leave as much room as possible for improvisational changes and extensions to the code, which makes for improvisational and varied performances.

What functions and coding approaches do you like to use?

What I currently use most often is the combination of scale and the % (polymetric subdivision) notation to generate various phrases. For example:

d1
$ s "supersaw*16"
# sustain "0.1"
# note (scale "minPent" "{-12..0}%5")

If the scale used (minPent) is changed to something else, the impression of the melody changes drastically. It is like improvisation in modal jazz.
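
For example, swapping in another of Tidal's built-in scale names is a one-word edit:

d1
$ s "supersaw*16"
# sustain "0.1"
# note (scale "dorian" "{-12..0}%5") -- only the scale name changed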

Furthermore, by using the left and right channels effectively and by adding filters, you can add more depth to the performance.

d1
$ s "supersaw*16"
# pan (rand)
# sustain "0.1"
# note (scale "indian" "{-12..[0, 5]}%[5, 7]")
# lpf (range 200 10000 $ slow 8 $ sine) # resonance "0.2"

More complex rhythmic swells can be generated by using functions such as "jux" and "rev" that create changes on the time axis.

d1
$ sometimesBy 0.3 (jux (iter 16))
$ sometimesBy 0.1 (rev)
$ s "supersaw*16"
# pan (rand)
# sustain "0.1"
# note (scale "indian" "{-12..[0, 5]}%[5, 7]")
# lpf (range 200 10000 $ slow 8 $ sine) # resonance "0.2"

Do you use Tidal with other tools / environments?

I use TidalCycles in combination with other applications that generate visuals for audiovisual performance. Initially I used openFrameworks, but recently I have been using TouchDesigner.

However, it is difficult for one person to live code both sound and visuals at the same time. So I currently use a method where the results of coding in TidalCycles are sent via OSC (Open Sound Control) to drive the visuals. I do the following.

First, I determine the names of the parameters to be sent from TidalCycles to TouchDesigner. For example, let's say we want to send an Integer value "td_s" that specifies the scene number in TouchDesigner. To do this, add the following statement to "BootTidal.hs":

let td_s = pI "td_s"
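
Once defined, td_s behaves like any other control and can be attached to a pattern (an illustrative sketch; the scene number is arbitrary):

d1 $ s "bd*4" # td_s 2 -- every event now also carries td_s = 2 in its OSC message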

Next, add the following to the SuperCollider initialization file "startup.scd". This listens for the "/dirt/play" OSC messages that TidalCycles sends to SuperCollider and forwards them on to another application on port 3333, under the OSCdef key "\tidalplay".

a = NetAddr.new("localhost", 3333);
OSCdef(\tidalplay, { arg msg;
    a.sendMsg(*msg);
}, '/dirt/play', n);

This OSC is parsed and used by the application generating the visuals. In the case of TouchDesigner, for example, the number can be retrieved by the following Python script in an OSC In DAT.

from os import times
from time import time

def onReceiveOSC(dat, rowIndex, message, bytes, timeStamp, address, args, peer):
    lst = message.split()
    try:
        td_s = lst[lst.index('"td_s"') + 1]
        op('scene_no').par.value0 = td_s
    except:
        pass
    return

This allows for live-coded audiovisual performances with synchronized sound and visuals, as shown in the video below!

[YouTube video]

For more details on the code, please refer to the GitHub repository below.

Tidal Contributions

How do you contribute to Tidal Cycles? What have you worked on?

My focus is on education and the popularization of live coding with TidalCycles. I give lectures at universities with live coding as the central theme. The first half of the class covers the basics of live coding with Sonic Pi, and the second half is full-scale live coding performance using TidalCycles. This type of lecture is rarely offered in Japan and has been well received.

What motivates you to work on Tidal?

The appeal of Tidal is its ability to generate very complex and diverse music and sounds with a few simple lines of code. The scalability of samples and instruments is also attractive.

Music

Tell us about your livecoding music.

As I mentioned in the Livecoding section, I am interested in audio-visual expression through livecoding. In addition to that, I am interested in rhythmic expressions that sound natural but are a little bit twisted. For example, I am interested in polyrhythms, polymeters, and asymmetrical rhythms.

How has your music evolved since you have been livecoding?

Livecoding has made me more sensitive to rhythmic structure than before. I used to use a lot of simple four-beat repetitions, but I have started to create rhythms with more complexity.

What samples or instruments do you like to work with?

I use the sound samples and instruments included in SuperDirt, as well as adding my own original samples and instruments. I have made them available in the following GitHub repository.

What projects are you currently working on or planning? What's next?

I am currently working on live coding laser beams. I hope to show the results of my various experiments at an Algorave. The current status is shown in the video below.

[YouTube video] https://www.youtube.com/shorts/ITRwjJPO2dY

Comments: Club Tidal Forum Thread

Tidal CyclistFelix
akafroos
LocationFrance / Germany
Years with Tidal1 yr
Other LiveCoding envStrudel
Music available onlineYouTube
Code onlineGitHub
Other music/audio sw/hwAbleton, Trumpet, DIY Synth
CommentsClub Tidal Forum Thread


Livecoding

What do you like about livecoding in Tidal? What inspires you?

There are many things that inspire me.. I generally like the minimalistic, text-based approach to music making, where everything is visible at all times on one screen. When I started making music with an MPC1000, menu-diving was a key part of the process. A similar thing can be said about DAWs like Ableton (and Push), where there are many different UI layers and hidden items. Combining a simple interface with a terse, nestable syntax makes Tidal a powerful tool full of rabbit holes to explore. Also, I like the fact that it is open source and thus hackable + the community around it is really refreshing.

How do you approach your livecoding sessions?

Being fairly new to livecoding, I don't have a go-to approach, but I tend to either just start with something really simple and go with the flow, or explore a specific function or idea and build on that. When I code for myself, I don't pay as much attention to the overall flow of the "performance", but rather try to find a loop that I like listening to. I guess much of my approach is still influenced by many years of making beats with a more traditional setup. That might change though...

What functions and coding approaches do you like to use?

I am really into chord voicings and "harmony hacking". While I also like music with simpler / less / no chords, I sometimes miss the rich harmonic colors of the past. Writing (and changing) chord progressions in a DAW can be tedious, which is probably one of the reasons why they have faded in general. If you don't play the piano fluently, you cannot quickly jot them down. In a live coding setting, chord progressions and voicings can be automated and simulated, which has great potential. This is especially fun with arpeggios, for example:

"<C^7 Dbo7 Dm7 C7>"
.voicings('lefthand') // voice chords
.arp("0 3 <2 0> [1 3]".iter(4))
.add(perlin.range(0,.5))// pitch warble
.add("<0 12>/16,.1") // hippie chorus
.sometimes(add("12")) // vary octaves
.almostNever(ply("2")) // little rhythmic glitches
.note().s('sine')
.decay(.125).gain(.8)
.sustain(sine.range(0,.5).slow(32))
.jux(rev).room(.8).fast(3/4)

Open in Strudel REPL

A while back I wrote two posts about voicing dictionaries and voicing permutation, which have now partly found their way into Strudel.

Do you use Tidal with other tools / environments?

Most of the time, I use Strudel to write Tidal patterns. Sometimes I visit my friend Lui Mafuta, where I livecode stuff via MIDI. It's also fun to add some trumpet notes on top.

Tidal Contributions

How do you contribute to Tidal Cycles? What have you worked on?

In the last year, I was all in on developing Strudel! It was exciting to see this lovely thing grow into what it is now. Maybe you're interested in the whole story and the recap after 1 year.

What led you to work on Tidal?

Long before I found Tidal, I wanted to build a hackable backing track player. I've spent many hours practising the trumpet using iReal Pro, which is popular practice software in the jazz / pop / improvised music sphere. I always dreamed of software that could generate such tracks from minimal input (just chord progressions) whilst being able to freely control the musical style. After having built several prototypes, I still was not satisfied. Luckily, I found Tidal and its emerging JavaScript port, whose flexible abstractions are perfect for implementing such a thing. Being more involved in computer music now (practising trumpet less :P), the dream of a hackable backing track player has morphed into a more general dream of an instrument that allows improvising electronic music, which is already becoming a reality!

Music

Tell us about your livecoding music.

I am still dipping my toes in; so far I am mostly translating and recontextualizing things I've done prior to livecoding. For example, I've created a video album of hip hop beats made with Strudel. Apart from that, I really like making music with frequencies only, mostly using pure intervals.

How has your music evolved since you have been livecoding?

I am starting to appreciate the glitch! It will probably get worse..

What samples or instruments do you like to work with?

Samplewise, I love to sample single notes and sounds from old recordings; for example, I've used the first note of this lovely album for the pluck sound in the last link above. High-quality sample banks are cool, but there is something special about single-sample repitches; maybe they just trigger tiny doses of nostalgia in my inner child, which consumed wavetable synthesis while playing Super Nintendo for hours.

What projects are you currently working on or planning? What's next?

Still busy hacking on Strudel! I am not the type to plan too far ahead, but I am excited about what's to come.

Some non-livecoded music I did as Puste using mostly the trumpet:

Thanks

Last but not least, huge thanks to all the people who are part of this space! Special thanks to Alex for building Tidal not only as a piece of software but also as a community, making the world of digital music making a little less boring, one cycle at a time :)

guy with fatty hair

Comments: Club Tidal Forum Thread

Tidal CyclistMartin Gius
akapolymorphic_engine
LocationVienna
Years with Tidal3 yrs
Other LiveCoding envSuperCollider, Hydra, ORCA
Music available onlineBandcamp
Code onlineGitHub
Other music/audio swReaper, PureData, Audacity
CommentsClub Tidal Forum Thread

Livecoding

What do you like about livecoding in Tidal? What inspires you?

I find the way Tidal allows me to approach music in a structural way fascinating. I like its concise yet readable syntax, especially combined with the mini-notation.

How do you approach your livecoding sessions?

When I make music on my own, I like to start out with simple rhythmic patterns and layer them with different versions of themselves (slower & lower / faster & higher / ..). Now apply the MI clouds effect and you can have fun for hours adjusting the parameters! (Note: see the clouds section in the Mi-UGens page of the User docs.)
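
A minimal sketch of that layering idea (illustrative, using hurry so each copy is both slower/lower or faster/higher):

d1 $ layer [ id
           , hurry 0.5 -- slower & lower
           , hurry 2 -- faster & higher
           ] $ s "drum(5,8)"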

I also like to use a traditional game controller and map the controls to conditional functions or effects in the code. For example, playing a drum pattern twice as fast when I press the 'A' button, or adjusting the pan according to a joystick. I like the thought that I am programming the functionality of a game live, while I am also playing it.
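
A sketch of how such a mapping can look, assuming the controller input reaches Tidal as OSC custom controls (the names "btnA" and "joyX", and the bridge that sends them, are hypothetical):

d1 $ while (fmap (> 0.5) (cF 0 "btnA")) (fast 2) -- twice as fast while 'A' is held
   $ s "drum*4"
   # pan (cF 0.5 "joyX") -- joystick X axis mapped to pan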

What functions and coding approaches do you like to use?

Probably my most used Tidal functions are layer and while. I also use the control bus feature a lot to manipulate the FX of longer sounds. I really like how randomness in Tidal works and how easy it is to generate arbitrary but repeating sequences or rhythms.
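
The control-bus variants append bus to a parameter name (available in Tidal 1.9+ with an up-to-date SuperDirt), so an effect can keep moving on a note that has already started; a minimal sketch:

d1 $ s "pad" # legato 2
   # lpfbus 1 (segment 16 $ slow 4 $ range 220 3200 sine) -- sweep the filter while the sample plays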

Here is an example of a jungle-inspired, abstract dance track. To make a four-cycle loop, evaluate the line

all $ timeLoop 4 . (rotL 4)

and change the number in rotL to shift the pattern. Try playing around with the parameters of the clouds effect as well, but be careful, it might get loud! :)

let setbpm x = setcps (x/60/4)
    _add :: Time -> Pattern a -> Pattern a -> Pattern a
    _add t value pat = slow (pure $ 1 + t) $ timeCat [(shift, pat), (1 - shift, value)]
      where shift = 1 / (t + 1)
    add :: Pattern Time -> Pattern a -> Pattern a -> Pattern a
    add pt x y = innerJoin $ fmap (\t -> _add t x y) pt

setbpm 160

all $ timeLoop 4 . (rotL 4)

all $ id

d1
$ while "t(4,16)" (|+ krush 1)
$ while "[0 | 1]*16" (superimpose (plyWith 4 (|* speed 1.25) . slow 2))
$ layer [ id
        , \x -> degradeBy (segment 16 perlin)
                $ slow 2
                $ x
                # speed 0.75
                # shape 0.1
        , \x -> add "[0.5 | 0.25]*4" (s "jungbass:1" # speed 0.8 # shape 0.2 # krush 2)
                $ x # speed "[2 | -2]*8"
        ]
$ s "[drum drum:1 [~ drum] drum:1, drum:3*[[8 | 16]*4]]"
# krush 2
# cloudswet 1
# cloudsgain 1
# cloudspitch (segment 16 $ smooth "[-1 | 1 | 0]*16")
# cloudstex (segment 16 $ smooth "[0.3 | 0.1 | 0.9]*4")
# cloudspos "[0 | 1]*8"
# cloudssize 0
# cloudsfb 0.3
# cloudsspread 0
# cloudsdens 0
# cloudsrvb 0
# cloudsfreeze 0

Do you use Tidal with other tools / environments?

I like to use Tidal together with Hydra and Vimix, and I use a game controller as external hardware.

Tidal Contributions

How do you contribute to Tidal Cycles? What have you worked on?

  • I had the opportunity to work on Tidal as part of the Haskell Summer of Code 2021. There, I mainly worked on packaging Tidal to allow users to use it without installing the whole Haskell environment. This led to me developing a whole code editor/interpreter with some features especially designed for Tidal, like displaying which patterns are playing/muted and the current cps/bpm, and the ability to control all features of the editor via OSC.

  • I'm also working on the tidal-listener, which provides a standalone interpreter that editor plugins etc. can use as an alternative to ghci.

  • Now I am mostly working on things that are related to the mini-notation and how it is parsed and interpreted. Most notably, I found a way to make the chord notation patternable and made it easier to add new custom chord modifiers.

What motivates you to work on Tidal?

Curiosity about the inner workings of Tidal, and the great community!

Music

Tell us about your livecoding music.

  • I often improvise together with people who play more traditional instruments. I find it very interesting to use microphones to get what the others are playing as an input that I can manipulate through coding.
  • I'm also interested in multi-channel sound / acousmatic music and the possibilities of live coding in this context. I think live coding could be a great tool for precisely controlling an acousmonium (a speaker orchestra, where each speaker has its separate channel). This means not just making the sounds that are being heard, but also distributing them across the speakers in real time (this is often called diffusion).

What samples or instruments do you like to work with?

Recently, I like to use very tiny grains of samples and process them with Tidal. What I like about this approach is that it is easy to manipulate and add effects to each grain individually. I also like to record my own samples with various microphones.
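
A small sketch of that grain idea (illustrative sample name): chop a sample into grains, then let each grain pick its own effect values:

d1 $ chop 32 (s "pad")
   # speed (range 0.5 2 rand) -- each grain gets its own playback speed
   # pan rand -- and its own position in the stereo field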

What projects are you currently working on or planning? What's next?

  • I would like to work on a bigger scale AV performance using Tidal, Hydra and Vimix together, to create something like a short film.
  • I'm also working on an interactive sound installation where I will probably use Tidal to generate the sound.
  • I'm working on a new acousmatic piece for a composition competition.

Other

I'm currently working on a live-coding language that will extend the mini-notation to a full programming language. It is still in early development, but maybe somebody is interested in helping me out! I'm working on it here.

youth photo with computer

Comments: Club Tidal Forum Thread