Re: [fluid-dev] DSP testing


From: David Olofson
Subject: Re: [fluid-dev] DSP testing
Date: Thu, 1 Apr 2004 10:18:20 +0200
User-agent: KMail/1.5.4

On Thursday 01 April 2004 03.00, Tim Goetze wrote:
[...]
> >I've thought of that. But when the audio thread picks up the
> > voice, how can you guarantee that the soundfont and its sample
> > cache are still in the same state (there could have been a MIDI
> > program change in between that messes things up).
>
> i've been thinking about this for a long time, and came to the
> conclusion that if the soundfont is not ready to play a note in
> realtime, that noteon must be dropped silently. allowing arbitrary
> jitter in the timing of notes is - musically - even worse than
> being quiet for a brief time after a program change.

That would depend on what kind of sounds you're dealing with... In the 
case of strings, pads and other sounds that tend to be used for long 
notes, it's usually much worse if the notes are dropped.

Anyway, the usual approach is to handle Program Change separately, and 
make sure NoteOns are always hard RT safe, except when a Program 
Change is in progress. That is, when you get a Program Change, you do 
all the soft RT/non RT stuff, allocate objects, buffers and whatnot, 
so that subsequent events can be handled entirely in the audio thread 
once the patch is "loaded".

Many h/w synths disable voices and delay or drop NoteOns after a 
Program Change, but I guess people expect wavetable and virtual 
analog synths that are less than ten years old to be totally hard RT 
in this respect... Samplers are different, as they have to load tons 
of data from disk after a Program Change, even if they're "direct 
from disk" samplers. (That might change eventually, though. There 
have been computer mainboards with battery-backed-up DRAM for years, 
and flash technology is probably fast enough now for playing directly 
from flash.)


> if a soundfont manager needs time to process a program change, this
> must not interfere with either the audio thread (obviously) or the
> MIDI feeding thread. the latter is not so obvious,

I think the latter is pretty obvious... What's less obvious to me is 
why there is a MIDI feeding thread at all. (Audiality does all MIDI 
processing in the audio thread. Live MIDI input is 
non-blocking/polling.)

The only point with a separate MIDI thread is when you want to deal 
with live MIDI control (external sequencers, MIDI controllers and 
stuff) and want the best possible timing accuracy regardless of audio 
latency. (That is, constant MIDI->audio latency rather than "random" 
jitter.) You'll have to run the MIDI thread at higher priority than 
the audio thread, and you'll need a reasonably accurate timer 
("multimedia" timers, performance counters or whatever) for the 
timestamping.
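
For example (made-up names again; the timer call and the FIFO are 
whatever the platform and engine provide): the MIDI thread stamps 
each event on arrival, and the audio thread turns that stamp into a 
sample offset a fixed latency later, so the delay stays constant 
instead of jittering with the buffer size. Clamping the offset to 
the buffer length and deferring events that land in a later buffer 
is left out here.

    #include <stdint.h>

    #define LATENCY_FRAMES 1024     /* fixed MIDI->audio delay      */

    typedef struct {
        uint64_t time_us;           /* arrival time, microseconds   */
        uint8_t  msg[3];            /* raw MIDI bytes               */
    } midi_event_t;

    /* Hypothetical helpers: */
    uint64_t read_monotonic_clock_us(void);       /* accurate timer */
    void     fifo_write(const midi_event_t *ev);  /* lock-free FIFO */

    /* MIDI thread (higher priority than audio): stamp and enqueue. */
    void midi_thread_push(const uint8_t *msg)
    {
        midi_event_t ev;
        ev.time_us = read_monotonic_clock_us();
        ev.msg[0] = msg[0]; ev.msg[1] = msg[1]; ev.msg[2] = msg[2];
        fifo_write(&ev);
    }

    /* Audio thread: timestamp -> sample offset within this buffer. */
    uint32_t event_frame_offset(const midi_event_t *ev,
                                uint64_t buffer_start_us,
                                uint32_t sample_rate)
    {
        uint64_t due_us = ev->time_us
                        + (uint64_t)LATENCY_FRAMES * 1000000 / sample_rate;
        if (due_us <= buffer_start_us)
            return 0;               /* already late: play ASAP      */
        return (uint32_t)((due_us - buffer_start_us)
                          * sample_rate / 1000000);
    }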


> but if we allow
> the soundfont loader to block (operate in) the MIDI thread, timing
> accuracy is ruined, which we can - i think - not tolerate.

Right. IMHO, this is especially important for games/multimedia 
engines, as you can't rely on shutting the audio subsystem down just 
to load/render some sounds. You're supposed to be able to load sounds 
and music while playing other stuff.


> we can conclude that a soundfont manager that does disk access (or
> other potentially blocking calls for that matter) needs to run in
> its own thread since it can't operate in either audio or MIDI
> context if we want jitter-free timing and no audible dropouts.
>
> the sf manager thread then needs to be notified of program change
> via another decoupling (simply reusing the FIFO prototype comes to
> my mind) and go about its business in the background.

In Audiality, I'm going for this approach:

        * MIDI processing, sequencing, audio processing etc
          is all done in the hard RT audio thread.

        * Operations that are not RT safe are sent off to
          worker callbacks that run in a soft RT thread.
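
In practice the second point boils down to a single-writer, 
single-reader ring buffer of requests between the two threads, which 
needs no locking at all. A sketch (made-up names, and volatile 
standing in for the memory barriers real code would want):

    #include <stdint.h>

    #define REQ_FIFO_SIZE 64        /* power of two */

    typedef struct {
        int   opcode;               /* "load patch", "render", ...  */
        int   arg;
        void *data;
    } worker_request_t;

    typedef struct {
        worker_request_t  buf[REQ_FIFO_SIZE];
        volatile uint32_t write_pos;  /* written by audio thread only  */
        volatile uint32_t read_pos;   /* written by worker thread only */
    } request_fifo_t;

    /* Audio thread side: never blocks; fails if the FIFO is full. */
    int fifo_push(request_fifo_t *f, const worker_request_t *r)
    {
        uint32_t w = f->write_pos;
        if (w - f->read_pos >= REQ_FIFO_SIZE)
            return -1;                    /* full: caller must cope */
        f->buf[w & (REQ_FIFO_SIZE - 1)] = *r;
        f->write_pos = w + 1;             /* publish */
        return 0;
    }

    /* Worker side: pop one request, or return 0 if there is none. */
    int fifo_pop(request_fifo_t *f, worker_request_t *out)
    {
        uint32_t r = f->read_pos;
        if (r == f->write_pos)
            return 0;
        *out = f->buf[r & (REQ_FIFO_SIZE - 1)];
        f->read_pos = r + 1;
        return 1;
    }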


As an example, an FX plugin that needs to reallocate buffers and stuff 
when certain parameters change can do the job in another thread, and 
throw the new buffers in when they're ready. Loading/rendering of 
off-line instruments can be handled the same way. (It's already done 
outside the audio thread, but instrument loading/rendering cannot be 
triggered by the audio thread yet.)
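
The "throw the new buffers in" part is basically just a pointer 
swap; the only rules are that the worker publishes a new buffer only 
while the slot is empty, and that the audio thread never frees 
anything. Another made-up-name sketch:

    #include <stddef.h>

    typedef struct {
        float * volatile pending;   /* set by the worker when the new
                                       buffer is completely filled    */
        float *current;             /* touched by audio thread only   */
    } fx_buffer_slot_t;

    /* Worker thread: allocate and fill the buffer, then publish it. */
    void fx_publish_buffer(fx_buffer_slot_t *s, float *newbuf)
    {
        s->pending = newbuf;        /* single pointer-sized write */
    }

    /* Audio thread, once per block: pick up a new buffer, if any. */
    float *fx_get_buffer(fx_buffer_slot_t *s)
    {
        float *p = s->pending;
        if (p) {
            s->pending = NULL;
            /* The old 'current' has to go back to the worker for
               freeing (e.g. via a return FIFO); never free() here. */
            s->current = p;
        }
        return s->current;
    }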


> when the next noteon comes in, the manager hopefully has all the
> preset data ready. if not, we simply drop the noteon as discussed.
> if yes, we can play the note right away, and it doesn't really
> matter whether we make the decision in MIDI or audio context.

Right.

BTW, it's really rather useful to have a way of telling whether or not 
a sound or song is ready to play. If all you can do is send Program 
Change events without getting any "done!" response back, you have to 
rely on the same approach you use with h/w synths and samplers: Have 
a bar of 
silence after a Program Change event... (Which doesn't even work 
reliably with samplers, as sounds may take ages to load.)
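
In an engine built like the above, that "done!" response can simply 
be a small completion event the worker posts back through a return 
FIFO once the patch is loaded, or, at its crudest, a flag the 
application can poll from a non-RT context (hypothetical again, 
reusing the patch_t sketch from earlier):

    /* Let the application ask, instead of guessing with a bar of
       silence. Poll from a non-RT context only.                   */
    int patch_is_ready(const patch_t *patch)
    {
        return patch->ready;
    }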


//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
|  Free/Open Source audio engine for games and multimedia.  |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---




