I have a non-uniform partitioned convolution working now. Not threaded yet, but I’m not sure whether it even needs to be.

With the project built for release, it can do convolution with a 6-second stereo impulse response in real time, using 10% CPU on my MacBook.

I compared with Space Designer: running within Logic, the same impulse response used 30% CPU… but that’s with all of Logic running at the same time.

Even so, it looks like my naive, unoptimized implementation is performing OK, and if I thread it and replace the tight loops with SIMD instructions, it’ll fly!
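For anyone wondering what the ‘non-uniform partitioned’ part means: the impulse response is chopped into partitions that grow in size, so the head of the IR can be processed with low latency while the tail gets handled by a few big, cheap FFTs. Here’s a minimal offline numpy sketch of that decomposition (the function names and the doubling schedule are just illustrative assumptions, not my actual streaming implementation):

    import numpy as np

    def fft_convolve(x, h):
        # plain linear convolution via the FFT
        n = len(x) + len(h) - 1
        return np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

    def nonuniform_partitioned_convolve(x, ir, first_size=64):
        # Split the IR into partitions that double in size, convolve the
        # input with each partition separately, and add each result back
        # in at that partition's offset. Offline illustration only: a
        # real-time version streams the input block by block and
        # schedules the large partitions ahead of time.
        out = np.zeros(len(x) + len(ir) - 1)
        offset, size = 0, first_size
        while offset < len(ir):
            part = ir[offset:offset + size]
            y = fft_convolve(x, part)
            out[offset:offset + len(y)] += y
            offset += size
            size *= 2  # growing partitions keep the tail cheap
        return out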

Feels nice to build something that’s actually fast for a change :D

This was posted 2 days ago. It has 12 notes.

In the end, I found one paper, but it’s fairly disappointing. It boils down to convolving the input signal twice, with the ‘previous’ and ‘current’ impulse responses, and then cross-fading in the time domain between the two outputs.

This was my ‘last resort’ idea all along, and it’s quite reassuring to know that there’s a paper I can cite if I do decide to go this route, but it doesn’t seem like the best solution somehow. The paper even says that ‘the output of the linear time-varying (LTV) system is a mixture of two linear time-invariant (LTI) filter signals which may not correspond to a true intermediate out signal.’
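Concretely, the paper’s recipe is about this much code. A toy numpy sketch over a whole buffer (the function name is mine, and a real implementation would work block by block with proper tail handling):

    import numpy as np

    def crossfade_convolve(x, h_prev, h_curr):
        # Convolve the input with both the previous and the current IR,
        # then crossfade between the two outputs in the time domain.
        # As the paper warns, this mixes two LTI outputs rather than
        # producing a true intermediate LTV response.
        assert len(h_prev) == len(h_curr)
        y_prev = np.convolve(x, h_prev)
        y_curr = np.convolve(x, h_curr)
        fade = np.linspace(0.0, 1.0, len(y_curr))  # linear ramp; equal-power fades are also common
        return (1.0 - fade) * y_prev + fade * y_curr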

Well, whatever. I can give it a go.

This was posted 3 days ago. It has 5 notes.

I’m thinking about raytracing audio in realtime, having seen (heard?) some demos of the technique. I found a talk about a Quake 3 mod that uses raytracing for audio, so I know it’s possible, but I’d like to build a musician-friendly tool for doing this kind of stuff. Also, offline time-invariant impulse responses just aren’t very exciting at this point.

I’ve built a simple partitioned FFT convolution (by which I mean a tiny class wrapper around some FFTW instances), which in theory should allow me to build a program that does real-time convolution of pre-rendered impulse responses (like Space Designer or Altiverb). It’s not perfect, and it’s not very performant, but it’s good enough for now. I’m fairly sure I can get it down to interactive latency levels if/when I need to.
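For reference, the uniform flavour of that idea fits in surprisingly little numpy. This is a hedged sketch (the class name and details are invented here, with numpy’s FFT standing in for FFTW), not the actual wrapper from my project:

    import numpy as np

    class PartitionedConvolver:
        # Uniform partitioned convolution: the IR is split into equal-size
        # blocks, each block's spectrum is precomputed, and the spectra of
        # past input blocks are kept in a frequency-domain delay line so a
        # single FFT/IFFT pair per block serves every partition.

        def __init__(self, ir, block_size):
            self.B = block_size
            n_parts = -(-len(ir) // block_size)  # ceiling division
            padded = np.zeros(n_parts * block_size)
            padded[:len(ir)] = ir
            parts = padded.reshape(n_parts, block_size)
            # zero-pad each partition to 2B so the spectral products below
            # correspond to linear rather than circular convolution
            self.H = np.fft.rfft(parts, n=2 * block_size, axis=1)
            self.fdl = np.zeros_like(self.H)     # spectra of past input blocks
            self.overlap = np.zeros(block_size)  # tail of the previous output

        def process(self, block):
            # feed one input block of length B, get one output block of length B
            assert len(block) == self.B
            self.fdl = np.roll(self.fdl, 1, axis=0)
            self.fdl[0] = np.fft.rfft(block, n=2 * self.B)
            y = np.fft.irfft((self.fdl * self.H).sum(axis=0), n=2 * self.B)
            out = y[:self.B] + self.overlap  # overlap-add the previous tail
            self.overlap = y[self.B:]
            return out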

However, what I really need is an algorithm for doing fast linear time-varying convolution. I don’t know how to do that, and I’m having a hard time researching it.

The key here is ‘fast’ rather than ‘linear time-varying convolution’. I know I could do discrete convolution with a different impulse response per input sample, but that would scale horrifically, and I’m sure there’s some kind of Fourier trick I can use instead… but I don’t know what that trick would be.
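Just to make the scaling problem concrete, the naive version looks something like this (a pure-illustration sketch, names mine); every output sample costs a full IR-length dot product, and the IR is allowed to change on every sample:

    import numpy as np

    def naive_ltv_convolve(x, irs):
        # Direct time-varying convolution: irs[n] is the impulse response
        # in effect at output sample n. Cost is O(len(x) * ir_length)
        # multiplies: fine for toy IRs, horrific for a seconds-long
        # reverb at 44.1 kHz.
        n_samples, ir_len = len(x), irs.shape[1]
        y = np.zeros(n_samples)
        for n in range(n_samples):
            taps = min(ir_len, n + 1)
            # dot the current IR with the most recent input samples
            y[n] = np.dot(irs[n, :taps], x[n::-1][:taps])
        return y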

Question time: Does anyone know about programming techniques for fast time-varying filters, or papers I should look up?

This was posted 4 days ago. It has 8 notes.

garblefart said: ive been fantasizing about raytraced audio and its potential brilliance for ages. i imagine youre not going down the realtime route?

Nah, I can’t think of a way of tracing enough rays fast enough. (I could do it on the GPU, but then it wouldn’t be very portable.) At the moment each impulse response takes anywhere from a few seconds to a few minutes to render, depending on the implementation and the complexity of the scene. That said, I’m still using a dumb linear search for collision testing rather than something clever like octrees or bounding interval hierarchies, so maybe that would help.
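(For concreteness, the ‘dumb linear search’ is essentially the following; a hypothetical sketch with made-up names, assuming sphere primitives, a normalised ray direction, and a ray origin outside the spheres. An octree or BIH would let most of these tests be skipped.)

    import numpy as np

    def first_hit(origin, direction, spheres):
        # Test the ray against every sphere and keep the nearest hit.
        best_t, best = np.inf, None
        for centre, radius in spheres:
            oc = origin - centre
            b = np.dot(oc, direction)
            c = np.dot(oc, oc) - radius * radius
            disc = b * b - c
            if disc < 0.0:
                continue              # the ray misses this sphere
            t = -b - np.sqrt(disc)    # nearer of the two intersection points
            if 1e-6 < t < best_t:
                best_t, best = t, (centre, radius)
        return best_t, best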

Also doing it in realtime requires stuff like an interface for defining paths/movements of microphones and sources, and a custom convolution of some kind, which I don’t really feel up to writing.

This was posted 1 month ago. It has 3 notes.
Thought I’d try making my audio raytracer spit out some data that I could visualise… Looking OK so far, I think.

This was posted 1 month ago. It has 63 notes.
Hello to beesandbombs I hope you don’t mind that I stole your idea a little bit. Imitation = flattery and so forth?

This was posted 1 month ago. It has 162 notes.

I mean, Python is a nice enough language that I’m willing to use it voluntarily, but on the other hand what if my brain turns into a potato?

This was posted 1 month ago. It has 9 notes.

So having written versions of my raytracing impulse response generator in C++ (twice, one cmd-line and one GUI) and Haskell (also twice because I didn’t know Haskell the first time), I decided that the fifth time would be the charm, and I’m rewriting in Python. The new repo is over here if you’re interested.

Why Python? Mainly because I thought a more interactive language would be conducive to easier debugging and faster mocking-up of ideas. I do miss my compile-time type-checking though.

I managed to get the actual raytracing part up and running in a day. It exports json in the same format as the last Haskell version, so the screenshot above is a comparison of the same scene, rendered in Python (left) and Haskell (right), both flattened with the Haskell flattener. Everything looks pretty good so far, but the files look like they have opposite phase, so there’s obviously a mistake in one of the versions somewhere… (note: the files shouldn’t be identical, because the ray directions used are randomly generated on each run.)

My immediate goals are to try to improve the memory usage and performance (the inner loop is probably going to be written in Cython eventually), and to get some kind of interactive visualiser up and running so that you can actually watch the rays bounce around as they’re rendered out.
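As a quick sanity check on the opposite-phase suspicion, something as crude as this should do (sketch only, names mine; since each render uses fresh random ray directions this can only indicate overall polarity, not sample-for-sample equality):

    import numpy as np

    def polarity(a, b):
        # If two renders of the same scene correlate negatively, one of
        # the renderers is probably flipping the sign somewhere.
        n = min(len(a), len(b))
        return "inverted" if np.dot(a[:n], b[:n]) < 0.0 else "matching"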

This was posted 1 month ago. It has 15 notes.

So I just managed to install hsndfile on OS X Mavericks, and it was a bit complicated, so I thought I’d write a thing about it just in case anyone else ever decides that they really need extensive audio format support in Haskell.

  1. First I installed libsndfile: downloaded it, cd'd into the source directory, ran ./configure, then sudo make install. The install failed looking for Carbon.h.
  2. I followed the advice here, modifying libsndfile_src/programs/Makefile so that the CFLAGS declaration included the flag -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/System/Library/Frameworks/Carbon.framework/Versions/A/Headers/.
  3. After making this change and re-running sudo make install, the library installed properly (in /usr/local/lib and /usr/local/include).
  4. Then I ran cabal install hsndfile. There was an error about needing c2hs.
  5. Ran cabal install c2hs. The c2hs sources were downloaded and built, and the executable installed in ~/Library/Haskell/ghc-7.6.3/lib/c2hs-0.17.2/bin with a symlink in ~/Library/Haskell/bin.
  6. Running cabal install hsndfile still failed. Apparently the c2hs install location is non-standard, and after mucking around with my PATH environment variable I ended up creating the directory ~/.cabal/bin and copying the c2hs executable into it, then running export PATH=~/.cabal/bin:$PATH.
  7. Now running cabal install hsndfile got a bit further, but c2hs itself failed processing stdio.h. This is a bug with c2hs (more info here), but luckily there’s a workaround:
  8. As root, I added the following definitions to /usr/local/include/sndfile.h before the #include <stdio.h>: 
    #define __AVAILABILITY__
    #define __OSX_AVAILABLE_STARTING(a,b)
    #define __OSX_AVAILABLE_BUT_DEPRECATED(a,b,c,d)
    #define __OSX_AVAILABLE_BUT_DEPRECATED_MSG(a,b,c,d,e)
  9. Ran cabal install hsndfile again, and everything worked this time. Hooray!
  10. At this point I felt some emotional turmoil due to the harsh contrast between Haskell’s functional purity and the sheer ugliness of this hacked-together precarious house-of-cards ‘solution’. Maybe I’ll go and write some poetry or drink to forget.

Disclaimer: Reuben does not condone the use of alcohol as an inhibitor of emotions or memories. Reuben also doesn’t drink, so the drinking to forget probably won’t happen. He also doesn’t write poetry. He’s basically just a machine that struggles with programming languages and plays that ‘2048’ game far too much.

This was posted 1 month ago. It has 1 note.

Anonymous said: cyan sucks

SHOTS FIRED

This was posted 1 month ago. It has 18 notes.