
Re: [fluid-dev] API design: fluid_synth_process()


From: Ceresa Jean-Jacques
Subject: Re: [fluid-dev] API design: fluid_synth_process()
Date: Thu, 3 May 2018 00:26:07 +0200 (CEST)

>jj> Note) Also, I think that in the future, the current internal "MIDI channel to output buffer" hard-coded mapping should be replaced by an API.
>
>Tom> Shouldn't the user be in charge of properly mapping any multichannel rendered output?

 

I prefer to think of "MIDI channel mapping" (done in rvoice_buffers_mix()) as a different subject from mapping the "multichannel rendered output" (i.e. dry and effects) to the final output, which should be done in fluid_synth_process().

 

About the MIDI channel mapping subject: assuming "synth.audio-channels" is 2, rvoice_buffers_mix() currently does the following (using a hard-coded formula, see picture fluid_rvoice_buffers_mix_mapping_1.jpg):

MIDI channels 0, 2, 4, 6, ... are mixed to internal audio channel 0.

MIDI channels 1, 3, 5, 7, ... are mixed to internal audio channel 1.

Currently this mixing is imposed by the internal mixer's hard-coded mapping. If the application wanted (for example) to route MIDI channel 2 to audio channel 1, this is not possible.
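
A minimal sketch of what this hard-coded distribution amounts to (the exact expression lives inside rvoice_buffers_mix() and may differ in detail; this only illustrates the even/odd behaviour described above):

    /* Sketch only: fixed MIDI channel -> internal audio channel mapping,
       matching the description above for "synth.audio-channels" = 2. */
    static int map_midi_to_audio_channel(int midi_chan, int audio_channels)
    {
        /* MIDI channels 0,2,4,... go to audio channel 0,
           MIDI channels 1,3,5,... go to audio channel 1, and so on. */
        return midi_chan % audio_channels;
    }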

The only easy way I see for an application to choose which instrument will be mixed into which speakers is a mapping API (to replace the current hard-coded MIDI channel mapping done in rvoice_buffers_mix()).
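
To make the idea concrete, such a mapping API could look roughly like this (fluid_synth_set_midi_audio_channel() is a purely hypothetical name used for illustration; it does not exist in FluidSynth):

    /* Hypothetical API sketch: let the application choose which internal
       audio channel a given MIDI channel is mixed to, instead of the
       fixed modulo distribution. */
    int fluid_synth_set_midi_audio_channel(fluid_synth_t *synth,
                                           int midi_chan,    /* MIDI channel index */
                                           int audio_chan);  /* 0 .. audio-channels-1 */

    /* Example: route MIDI channel 2 to audio channel 1, which the
       current hard-coded mapping cannot do:
       fluid_synth_set_midi_audio_channel(synth, 2, 1); */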

I am aware that this is also (like surround) out of the scope of the current fluid_synth_process() talk, which relates only to the mapping of "dry" and "effects" to the final output channels. It is simply that the need for mapping that is "as easy as possible" is common to both subjects. I hope that readers will not be confused by these two subjects; to avoid any possible confusion, for now I will not talk any more about "MIDI channel mapping".

jjc.

 

 

> Message of 02/05/18 21:45
> From: "Tom M." <address@hidden>
> To: "Ceresa Jean-Jacques" <address@hidden>
> Cc: address@hidden
> Subject: Re: [fluid-dev] API design: fluid_synth_process()
>
> > You mean 5.1 (i.e. 6 channels)?
> > Sorry, I don't understand this index 5. Could surround follow any other known buffers?
>
> I think each audio channel should receive the full spectrum; fluidsynth shouldn't be in charge of rendering the subwoofer channel. But these are implementation details for surround audio, which is way beyond the scope of this talk. I just wanted to point out the channel layout for fluid_synth_process() in case the user wishes to instantiate a surround-capable synth.
>
>
> > Mapping of dry and effects to output buffers is not easy to solve. I don't think that fluidsynth should impose its strategy.
>
> One might allow buffers to alias. Suppose
>
> "synth.audio-channels" is 1
> "synth.effects-channels" is 2
>
> The user would call fluid_synth_process(synth, 64, 0, NULL, 6, out), where out:
>
> out[0] = out[2] = out[4] = left_buf;
> out[1] = out[3] = out[5] = right_buf;
>
> and left_buf and right_buf are zeroed buffers to which fluidsynth would simply add the synthesized audio.
>
> If the user only passes 2 buffers, the channels would wrap around and the result would be the same:
>
> out[0] = left_buf;
> out[1] = right_buf;
> fluid_synth_process(synth, 64, 0, NULL, 2, out)
>
>
> > Note) Also, i think that in the future, the actual internal "MIDI channel to output buffer" hard coded mapping should be replaced by an API.
>
> Shouldnt be the user in charge of properly mapping any multichannel rendered output?
>
>
> Tom
>
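
For what it is worth, here is a minimal caller-side sketch of the buffer aliasing scheme quoted above, assuming the fluid_synth_process() signature under discussion (synth, len, nfx, fx, nout, out) with "synth.audio-channels" = 1 and "synth.effects-channels" = 2; left_buf, right_buf and render_stereo() are illustrative names only:

    #include <fluidsynth.h>

    /* With 1 audio channel pair and 2 effects channel pairs the synth
       exposes 6 output channels; the application mixes everything down
       to one stereo pair by aliasing the buffer pointers. */
    void render_stereo(fluid_synth_t *synth)
    {
        float left_buf[64] = { 0.0f };   /* zeroed: fluidsynth adds into them */
        float right_buf[64] = { 0.0f };
        float *out[6];

        out[0] = out[2] = out[4] = left_buf;
        out[1] = out[3] = out[5] = right_buf;

        fluid_synth_process(synth, 64, 0, NULL, 6, out);

        /* Equivalent, relying on the wrap-around behaviour:
           float *out2[2] = { left_buf, right_buf };
           fluid_synth_process(synth, 64, 0, NULL, 2, out2); */
    }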
