
Re: [fluid-dev] New development : system clock vs. audio clock


From: Antoine Schmitt
Subject: Re: [fluid-dev] New development : system clock vs. audio clock
Date: Tue, 27 Jan 2009 15:45:58 +0100

Hi Josh and Bernat,

The issue I fixed was for real-time rendering, when using the sequencer. It was related not only to the standard, simpler latency caused by the size of the driver buffer, but also to unexpected behavior from the DSound driver, which, depending on the target hardware and other unknown reasons, would actually request buffers in bulk: it would ask for 16 buffers in a row, thus multiplying the latency by 16. And this was not consistent (sometimes 1 buffer would be requested, sometimes 16). I have logs on this. It means that the audio was, in a way, running much ahead of real time.
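To give an idea of the orders of magnitude involved (the figures below are assumptions for illustration - 64-frame buffers at 44100 Hz - not values taken from my logs):

/* Rough illustration of why bulk buffer requests matter.  The 64-frame
 * buffer size and 44100 Hz rate are assumed values for the example. */
#include <stdio.h>

int main(void)
{
    const double sample_rate   = 44100.0; /* assumed output rate */
    const int    buffer_frames = 64;      /* assumed driver buffer size */
    const int    burst         = 16;      /* buffers requested in one go */

    double per_buffer_ms = buffer_frames * 1000.0 / sample_rate;
    /* One buffer is ~1.45 ms; a burst of 16 is ~23 ms.  Events scheduled
     * against the system clock can then only land on ~23 ms boundaries,
     * which is clearly audible in a rhythm. */
    printf("one buffer : %.2f ms\n", per_buffer_ms);
    printf("one burst  : %.2f ms\n", per_buffer_ms * burst);
    return 0;
}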

The result was that the "sub audio buffer MIDI event processing" issue that Josh mentions was multiplied by 16, resulting in audible irregularities in rhythms. IIRC, MIDI playback is also attached to the system clock, with a timer, so this problem will also happen for MIDI file playback, not only for sequencer playback. [As a side note: there is, again IIRC, some redundancy in code between the sequencer and the midifile playback. This could be factored out, for example by having the midifile playback use the sequencer to insert MIDI events in the audio stream, as sketched below - end of side note]
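A minimal sketch of what that factoring could look like, just to make the side note concrete: the sequencer and event calls below are the existing FluidSynth API, but the helper function, its parameters and whoever computes the timestamps are hypothetical.

/* Sketch only: the midifile player hands its events to the sequencer with
 * an absolute timestamp instead of firing them from its own system timer.
 * new_fluid_event(), fluid_event_set_dest(), fluid_event_noteon() and
 * fluid_sequencer_send_at() are existing API; the helper and its caller
 * are hypothetical. */
#include <fluidsynth.h>

static void schedule_noteon(fluid_sequencer_t *seq, short synth_dest,
                            int chan, int key, int vel, unsigned int when_ms)
{
    fluid_event_t *evt = new_fluid_event();
    fluid_event_set_dest(evt, synth_dest);         /* deliver to the synth client */
    fluid_event_noteon(evt, chan, key, vel);
    fluid_sequencer_send_at(seq, evt, when_ms, 1); /* 1 = absolute time */
    delete_fluid_event(evt);                       /* the sequencer keeps a copy */
}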

I fixed this by basing the sequencer on the audio time (how many samples have elapsed), _and_ by calling the sequencer routine just before filling each audio buffer.

-> I guess that I did not fix this same issue for midifile playback, then.
-> Also, I reduced the precision to a single buffer length; I did not address sub-buffer precision.
=> I guess this could really benefit from an overall cleanup.

As for the question of where to process the scheduled MIDI events (whether they come through the sequencer or through the midifile playback), I think that the only way to get consistent and reliable rendering is indeed to do it inside the callback from the audio driver, especially if the audio runs ahead of real time.
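To make that concrete, here is a minimal sketch of such a callback (not the actual code I committed). fluid_sequencer_process() and fluid_synth_write_s16() are existing FluidSynth calls; the callback signature, the globals and the 44100 Hz rate are assumptions for the example, and the sequencer is assumed to have been created without its own system timer.

#include <fluidsynth.h>

static fluid_synth_t     *synth;            /* created elsewhere */
static fluid_sequencer_t *sequencer;        /* created elsewhere, no system timer */
static unsigned int       samples_elapsed;  /* the "audio clock" */
static const double       sample_rate = 44100.0;   /* assumed */

/* Called by the audio driver each time it wants one buffer of audio,
 * however many buffers it asks for in a row. */
static int render_buffer(short *left, short *right, int frames)
{
    /* 1. Convert the audio clock to the sequencer's time base (ms) and
     *    deliver every event that is due, *before* synthesizing. */
    unsigned int now_ms =
        (unsigned int)(samples_elapsed * 1000.0 / sample_rate);
    fluid_sequencer_process(sequencer, now_ms);

    /* 2. Render exactly one buffer. */
    fluid_synth_write_s16(synth, frames, left, 0, 1, right, 0, 1);

    /* 3. Advance the audio clock by what was actually rendered, so timing
     *    no longer depends on the system clock or the driver's pacing. */
    samples_elapsed += (unsigned int)frames;
    return 0;
}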


On 27 Jan 2009, at 03:32, Josh Green wrote:

> I probably shouldn't say too much, until I see what Antoine's solution
> is.. But..
>
> On Tue, 2009-01-27 at 03:04 +0100, Bernat Arlandis i Mañó wrote:
>>> It makes sense to me to process the audio based on the audio playback.
>>> This would lead to identical playback between successive renders of a
>>> MIDI file, which is what we want.
>>
>> This could be the only advantage I can think of, but it would only be
>> reproducible with the same hardware, driver and audio buffer size setup.
>> If you're thinking of test cases, then the only solution is non-RT
>> rendering.
>
> Indeed, it seems like it is the most useful for non-RT rendering. I
> think the issue that Antoine was originally trying to fix was related to
> the Windows DSound driver implementation processing a lot more data than
> just an audio buffer, which really seems like a driver issue to me.
>
>>> I don't see a problem with this change and I think it would vastly
>>> improve things. There might be a little more overhead as far as MIDI
>>> event processing, but it would lead to more accurate timing as well.
>>
>> This would worsen latency since the core thread would have to do more
>> work at the critical point where the sound card is waiting for data.
>
> Hmmm. Not if you are simply using the number of samples played out of
> the sound card as a timing source. Or am I still overlooking something?
> It seems to me like using a system timer for MIDI file event timing
> (something that has different resolutions depending on the system) is
> going to be a lot less reliable than using the sound card time. Again
> though, I agree that this probably only benefits MIDI file
> playback/rendering.
>
>> Besides, I don't think having the MIDI file player depend on the audio
>> driver is good.
>
> What about just using it as a timing source? I still haven't thought it
> all through, but I could see how this could have its advantages.
>
>> And, please, this shouldn't be taken as disrespect to Antoine's work;
>> I'd still have a look at it to see what he has really accomplished.
>>
>> I think it's cool having this discussion now, since you're the
>> maintainer and you'll want to have some control over future
>> development, which is logical. I'd like to see how well we work it out
>> when we don't agree. :)
>>
>> Cheers.
>
> Well, I'm not particularly attached to how things go, just as long as we
> do the "right thing" (TM) and KIFS (Keep It F...... Simple) ;)
>
> Cheers.
>         Josh






++ as





