[fluid-dev] Progress = YES (was Re: Is there any progress???)


From: Josh Green
Subject: [fluid-dev] Progress = YES (was Re: Is there any progress???)
Date: 18 Mar 2003 16:37:23 -0800

On Tue, 2003-03-18 at 15:16, Peter Hanappe wrote:
> 
> Streamed sample playback has been on my todo list for quite a while now.
> However, the first goal was to develop a synthesizer that is MIDI
> and SoundFont compatible. Version 1.0 pretty much satisfies that goal.
> As soon as we split off the development branch (version 1.1.x) we can
> start adding sample streaming.
> 

Thanks for clarifying that.

> Maybe the better thing to do is to improve the current
> sfloader and sample API and add the intelligence of the sample
> streaming (i.e. caching, preloading) in libInstPatch rather than in
> FluidSynth.
> 

It seems like something that could be implemented with the sfloader
sample API plus some callback functions. As for Swami, I think the
streaming functionality would belong in the libswami FluidSynth plugin
rather than in libInstPatch, whose scope is currently patch-file
specific. Sample streaming functionality is not likely to be very
re-usable at any rate.

Preloading is probably possible now using the preset notify function
that you just added: when a preset first gets selected (bank:program
change), the application (in this case Swami) could preload a chunk of
every sample in that preset. The only thing I think is missing is a way
to stream the actual sample data from the app to FluidSynth. I guess
there are a couple of ways to go about this though, both using callback
methods to fetch the audio?
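
Roughly what I'm imagining for the preload part, just to make it
concrete. The notify signature and all of the app_* names below are
invented for illustration, not real API:

/* Hypothetical preset notify handler on the app (Swami) side. Assumes
 * some way to iterate the samples a preset references and a cache that
 * can fault in the first chunk of each sample from disk. */

#define PRELOAD_FRAMES 16384    /* frames to preload per sample (a guess) */

static void
app_preset_notify (void *preset, int bank, int program, void *user_data)
{
  iiwu_sample_t *sample;

  for (sample = app_preset_first_sample (preset); sample != NULL;
       sample = app_preset_next_sample (preset, sample))
    {
      /* read the first PRELOAD_FRAMES frames into the app's cache so
       * the first voices on this preset don't have to wait on the disk */
      app_cache_preload (sample, PRELOAD_FRAMES);
    }
}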

If it were a continuous audio stream, then it would be up to the app to
do its own looping and it would have to continuously feed audio through
a callback (even for small looped samples), although perhaps the
callback could return a code to stop audio at, say, the end of a single
shot sample. Prototype (naming convention probably wrong):

/**
 * @sample: Sample structure allocated by app
 * @size: Number of samples requested
 * @buffer: Buffer to store the samples
 *
 * Returns: Number of samples transferred to buffer. Any size less than
 * @size will cause the stream to stop.
 */
typedef int (*FluidSFLoaderStreamContinuous)(iiwu_sample_t *sample,
              guint size, fluid_float_t *buffer);
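
On the app side, the continuous case might look something like this.
The AppSampleStream struct and app_sample_get_stream() are invented for
the example, and the sample data is assumed to already sit on disk as
raw fluid_float_t frames (iiwu_sample_t and fluid_float_t come from the
FluidSynth headers):

#include <stdio.h>
#include <glib.h>            /* guint, MIN */

typedef struct
{
  FILE *file;                /* sample data on disk (raw fluid_float_t) */
  long data_ofs;             /* file offset of the first frame */
  guint pos;                 /* current play position in frames */
  guint loop_start;          /* loop start in frames */
  guint end;                 /* loop end (looped) or sample end (one shot) */
  int looped;                /* TRUE if the sample loops */
} AppSampleStream;

static int
app_stream_continuous (iiwu_sample_t *sample, guint size,
                       fluid_float_t *buffer)
{
  AppSampleStream *s = app_sample_get_stream (sample);  /* hypothetical */
  guint filled = 0, chunk, got;

  while (filled < size)
    {
      if (s->pos >= s->end)
        {
          if (!s->looped) break;        /* one shot sample finished */
          s->pos = s->loop_start;       /* app does its own looping */
        }

      chunk = MIN (size - filled, s->end - s->pos);
      fseek (s->file, s->data_ofs + (long)(s->pos * sizeof (fluid_float_t)),
             SEEK_SET);
      got = fread (buffer + filled, sizeof (fluid_float_t), chunk, s->file);
      if (got == 0) break;              /* read error, give up */
      filled += got;
      s->pos += got;
    }

  return filled;    /* anything less than @size stops the stream */
}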


The other method would be for FluidSynth to manage the sample and
looping parameters itself and pass a sample position, along with the
number of samples, to the callback function when fetching audio.

typedef void (*FluidSFLoaderStreamSample)(iiwu_sample_t *sample,
              guint offset, guint size, fluid_float_t *buffer);
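
The app side of this one could be almost trivial, since FluidSynth
tracks position and looping itself. Reusing the invented AppSampleStream
from the sketch above:

#include <string.h>          /* memset */

static void
app_stream_sample (iiwu_sample_t *sample, guint offset, guint size,
                   fluid_float_t *buffer)
{
  AppSampleStream *s = app_sample_get_stream (sample);  /* hypothetical */
  guint got;

  fseek (s->file, s->data_ofs + (long)(offset * sizeof (fluid_float_t)),
         SEEK_SET);
  got = fread (buffer, sizeof (fluid_float_t), size, s->file);

  /* zero out anything that couldn't be read so the voice gets silence
   * instead of garbage */
  if (got < size)
    memset (buffer + got, 0, (size - got) * sizeof (fluid_float_t));
}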


Does that make any sense? Perhaps both methods should be implemented? As
for sample caching: is it necessary to implement one's own caching
mechanism (rather than relying on the OS)? I'm sure the answer to this
is probably yes.
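
For the "rely on the OS" option, the cache could conceivably just be an
mmap() of the sample data, with the kernel paging chunks in on demand
and keeping recently used ones around. A minimal sketch (the function
name is made up, error handling mostly omitted):

#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>

/* map a raw sample data file read-only; the kernel's page cache then
 * does the chunk caching for us */
static void *
app_sample_map (const char *path, size_t length, int *fd_out)
{
  int fd = open (path, O_RDONLY);
  void *data;

  if (fd < 0) return NULL;

  data = mmap (NULL, length, PROT_READ, MAP_SHARED, fd, 0);
  if (data == MAP_FAILED)
    {
      close (fd);
      return NULL;
    }

  *fd_out = fd;
  return data;
}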

> 
> > From what I currently know of FluidSynth goals, it currently falls short
> > of the linuxsampler goal of multi patch format support. FluidSynth is
> > likely to remain SoundFont based, but its sfloader API that I use to
> > interface Swami's instruments to it is generic enough to allow any
> > format to be synthesized (within the constraints of SoundFont
> > synthesis). 
> 
> I'm of the opinion that it's better that FluidSynth implements the
> SoundFont synthesis model as efficiently as possible instead of trying
> to become a Swiss army knife for wavetable synthesis. I think the users
> benefit more from having the choice between several small but optimized
> synthesizers rather than one big, buggy synthesizer. As you said,
> additional patch types can be supported through libInstPatch, if they
> map reasonably well to the SoundFont synthesis model.
> 
> Peter
> 

Sounds good to me :) Cheers.
        Josh




