
Re: [fluid-dev] API design: fluid_synth_process()


From: Ceresa Jean-Jacques
Subject: Re: [fluid-dev] API design: fluid_synth_process()
Date: Thu, 3 May 2018 15:36:34 +0200 (CEST)

Hi,

 

> One might allow buffers to alias. Suppose
>
> "synth.audio-channels" is 1
> "synth.effects-channels" is 2
>
> The user would call fluid_synth_process(synth, 64, 0, NULL, 6, out) where out:
>
> out[0] = out[2] = out[4] = left_buf;
> out[1] = out[3] = out[5] = right_buf;

 

1) Yes, when nout/2 (3) is greater than the rendered synth.audio-channels count (1), the idea of expanding the first part of the out array (indices 0 to 1) into the remainder (indices 2 to 5) is fine.

This allows every loudspeaker to always reproduce something (otherwise, without expansion, loudspeakers 2 to 5 would stay silent). However, to keep the balance equal, the user should take care to choose the remainder loudspeaker count (4) to be a multiple of the first part count (2). All of this is a detail that is solely the user's responsibility.
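For illustration only, a minimal sketch of such a call in the expansion case, assuming the proposed argument order (synth, len, nfx, fx, nout, out) and hypothetical buffer names:

    /* synth.audio-channels == 1, but six distinct, zero-initialized
       output buffers are supplied (64 frames each) */
    float front_l[64] = {0}, front_r[64] = {0};
    float rear_l[64]  = {0}, rear_r[64]  = {0};
    float center[64]  = {0}, lfe[64]     = {0};
    float *out[6] = { front_l, front_r, rear_l, rear_r, center, lfe };

    fluid_synth_process(synth, 64, 0, NULL, 6, out);
    /* with the expansion of (1), the single rendered stereo channel is
       also added to out[2]..out[5], so no loudspeaker stays silent */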


> and left_buf and right_buf are zeroed buffers to which fluidsynth would simply add the synthesized audio.
>
> If the user only passes 2 buffers, the channels would wrap around and the result would be the same:

Yes,

2) Conversely, when the rendered synth.audio-channels count (e.g. 4) is greater than the nout/2 count (e.g. 2), the idea of wrapping and mixing the remaining rendered audio channels (2 to 3) into the same out buffers that already contain the first two audio channels (0 to 1) is also fine. This allows all rendered audio channels to be heard (with no missing MIDI instrument), even when the count of supplied out buffer pairs (nout/2) is below synth.audio-channels. These ideas make fluid_synth_process() both clever and flexible.
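Both the expansion in (1) and the wrapping in (2) can be seen as one modulo mapping. A sketch of the idea only, not of the actual implementation (mix_add, rendered_left and rendered_right are hypothetical names used just to show the indexing):

    int out_pairs = nout / 2;
    int n_pairs   = (audio_channels > out_pairs) ? audio_channels : out_pairs;

    for (int p = 0; p < n_pairs; p++)
    {
        int src  = p % audio_channels;  /* rendered stereo channel to read */
        int dest = p % out_pairs;       /* output buffer pair to mix into  */
        mix_add(out[2 * dest],     rendered_left[src],  len);  /* left  */
        mix_add(out[2 * dest + 1], rendered_right[src], len);  /* right */
    }

With audio_channels == 1 and nout == 6 this expands channel 0 into all three pairs; with audio_channels == 4 and nout == 2 it wraps channels 2 and 3 back onto the first pair.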

 

Also, for the rendered fx channels, I was thinking of using the same flexibility.

I mean, using the currently unused {nin count, in array} parameters as {nout_fx count, fx_out array} (as you already proposed) to mix the internally rendered fx channels into the fx_out array, the same way the internal dry audio channels are mixed into the out array. This way the user can choose to get the pair {rendered dry audio, rendered fx audio} in two distinct output buffer arrays {out, fx_out}.

Note however that if the user chooses fx_out to be the same array as out, all rendered audio (dry and fx) will be mixed into a single out array. Of course, the way fluid_synth_process() performs the expansion and wrapping described in (1) and (2) should be clearly documented.
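For illustration only (assuming the proposed signature fluid_synth_process(synth, len, nout_fx, fx_out, nout, out) and hypothetical buffer names), the two usages could look like:

    float dry_l[64] = {0}, dry_r[64] = {0};
    float fx_l[64]  = {0}, fx_r[64]  = {0};

    float *out[2]    = { dry_l, dry_r };
    float *fx_out[2] = { fx_l, fx_r };

    /* dry and fx audio delivered in distinct buffer pairs */
    fluid_synth_process(synth, 64, 2, fx_out, 2, out);

    /* fx_out aliased to out: dry and fx end up mixed in the same buffers */
    fluid_synth_process(synth, 64, 2, out, 2, out);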

 

Detail notes:

- The nout count (and nout_fx count) supplied by the user should always be a multiple of 2 (because of the rendered stereo audio frames); see the small check sketched below. (This is not relevant when the user intends to produce a surround rendering synth.)

- Of course, memory allocation must be avoided inside fluid_synth_process().
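For example, a sketch of such an argument check (the exact error handling is an implementation detail; FLUID_FAILED is just the usual failure return value):

    /* reject buffer counts that do not form complete stereo pairs */
    if (nout % 2 != 0 || nout_fx % 2 != 0)
        return FLUID_FAILED;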

jjc

> Message from 02/05/18 21:45
> From: "Tom M." <address@hidden>
> To: "Ceresa Jean-Jacques" <address@hidden>
> Cc: address@hidden
> Subject: Re: [fluid-dev] API design: fluid_synth_process()
>
> > You mean 5.1 (i.e. 6 channels)?
> > Sorry, I don't understand this index 5. Could surround follow any other known buffers?
>
> I think each audio channel should receive the full spectrum. fluidsynth shouldn't be in charge of rendering the subwoofer channel. But these are implementation details for surround audio, which is well beyond the scope of this discussion. I just wanted to point out the channel layout for fluid_synth_process() if the user wishes to instantiate a surround-capable synth.
>
>
> > Mapping of dry and effect channels to output buffers is not easy to solve. I don't think that fluidsynth should impose its strategy.
>
> One might allow buffers to alias. Suppose
>
> "synth.audio-channels" is 1
> "synth.effects-channels" is 2
>
> The user would call fluid_synth_process(synth, 64, 0, NULL, 6, out) where out:
>
> out[0] = out[2] = out[4] = left_buf;
> out[1] = out[3] = out[5] = right_buf;
>
> and left_buf and right_buf are zeroed buffers to which fluidsynth would simply add the synthesized audio.
>
> If the user only passes 2 buffers, the channels would wrap around and the result would be the same:
>
> out[0] = left_buf;
> out[1] = right_buf;
> fluid_synth_process(synth, 64, 0, NULL, 2, out)
>
>
> > Note) Also, i think that in the future, the actual internal "MIDI channel to output buffer" hard coded mapping should be replaced by an API.
>
> Shouldn't the user be in charge of properly mapping any multichannel rendered output?
>
>
> Tom
>
