
Re: [fluid-dev] Real-time Controls + Audio Streaming


From: Element Green
Subject: Re: [fluid-dev] Real-time Controls + Audio Streaming
Date: Sat, 3 Nov 2012 11:59:47 -0700



On Sat, Nov 3, 2012 at 6:44 AM, David Pearah <address@hidden> wrote:
Hello. 

I am an experienced web programmer (JavaScript, HTML, Flash, Java) but relatively new to MIDI. I am researching a project that requires both 1. real-time audio synthesis and 2. real-time audio streaming from server to client, but I wanted to get insight/direction before moving forward with FluidSynth. Very simply, this is what I'm trying to build:
  • USER EXPERIENCE
    • Web app (i.e. runs in most Mac + Windows desktop browsers) that plays MIDI files with high-quality grand piano SoundFont
    • Real-time controls for speed and pitch (along with typical controls for volume, pause/play, etc.)... so there's no option to pre-generate audio files since you can't anticipate what pitch/key combination will be requested in the middle of playing the song.
  • ARCHITECTURE:
    • My assumption is that it is NOT a good idea to have the softsynth running in the browser (computationally intense, large SoundFont download, installing a fat client vs. using a web app, etc.)
    • So this leads me to believe that the softsynth should be running in real-time on the server, generating audio that can be streamed to the browser app, which would be very lightweight since all it would need to do is play streaming audio
    • The controls for speed + pitch would actually go back to the server, and in real-time cause the softsynth to generate the corresponding audio which would be streamed to the web client
So my questions are:
  1. Can FluidSynth be installed on a server and generate real-time audio fast enough to keep up with playback, i.e. given a typical server CPU and a single piano instrument, is it reasonable to expect that FluidSynth can generate audio faster than real time?
  2. Can the FluidSynth API be accessed mid-song to change the pitch and velocity, or does playback have to start over from the beginning of the song?
  3. Do you know of anyone who has taken the audio output from FluidSynth and streamed it to another client?

I greatly appreciate your taking the time to review these questions and hopefully point me in the right direction. And for those who are interested, I'm willing to pay for a short-term development contract to help get this project started.

Thanks!

-- Dave


Hello Dave,

I have also been interested in such an application of FluidSynth for some time now, for use with the online SoundFont instrument database project that I was working on.  In that case it would be used to preview SoundFont instruments with a JavaScript- or Flash-based keyboard interface for playing notes, etc.  It seems like a similar application to yours.

My own thoughts on the architecture of such a system are the following:

* Would be a server-based solution, with a server application written in C.

* Server application would handle spawning FluidSynth rendering threads, using libFluidSynth, to stream audio to users (see the first sketch after this list).

* Server would provide a FastCGI interface for controlling the FluidSynth instances (second sketch below).

* Client application interface would be JavaScript or Flash and would use AJAX to control its FluidSynth instance.

* Server application would interface with a Shoutcast server to stream the encoded MP3 data.
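
As a rough illustration of the rendering-thread idea, here is a minimal sketch against the libFluidSynth API. The SoundFont path and the send_to_encoder() hand-off are placeholders for illustration, not part of any existing code:

#include <fluidsynth.h>

#define FRAMES_PER_BLOCK 1024   /* frames rendered per loop iteration */

/* Stub standing in for the MP3 encoder / Shoutcast hand-off. */
static void send_to_encoder(const short *frames, int nframes)
{
    (void)frames; (void)nframes;
}

void render_session(const char *midi_file)
{
    fluid_settings_t *settings = new_fluid_settings();
    fluid_synth_t *synth = new_fluid_synth(settings);
    fluid_player_t *player = new_fluid_player(synth);
    short buf[FRAMES_PER_BLOCK * 2];            /* interleaved stereo s16 */

    fluid_synth_sfload(synth, "/path/to/piano.sf2", 1);  /* placeholder path */
    fluid_player_add(player, midi_file);
    fluid_player_play(player);

    while (fluid_player_get_status(player) == FLUID_PLAYER_PLAYING) {
        /* Pull one block of audio from the synth.  This can run much
           faster than real time, so the streaming side has to pace it. */
        fluid_synth_write_s16(synth, FRAMES_PER_BLOCK,
                              buf, 0, 2,        /* left: offset 0, stride 2 */
                              buf, 1, 2);       /* right: offset 1, stride 2 */
        send_to_encoder(buf, FRAMES_PER_BLOCK);
    }

    delete_fluid_player(player);
    delete_fluid_synth(synth);
    delete_fluid_settings(settings);
}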
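
And a corresponding sketch of the FastCGI control path, assuming libfcgi (fcgiapp.h). The lookup_synth() helper and the query format are made up for illustration, and real parameter parsing is elided:

#include <fcgiapp.h>
#include <fluidsynth.h>

/* Hypothetical registry mapping a session id to its running synth. */
extern fluid_synth_t *lookup_synth(const char *session_id);

int main(void)
{
    FCGX_Request req;

    FCGX_Init();
    FCGX_InitRequest(&req, 0, 0);

    while (FCGX_Accept_r(&req) == 0) {
        /* e.g. QUERY_STRING = "session=42&bend=9000" (format is assumed) */
        const char *query = FCGX_GetParam("QUERY_STRING", req.envp);
        fluid_synth_t *synth = query ? lookup_synth("42") : NULL;  /* parse id from query */

        if (synth) {
            /* Control changes act on the running synth immediately. */
            fluid_synth_pitch_bend(synth, 0, 9000);  /* channel 0, 14-bit bend value */
        }
        FCGX_FPrintF(req.out, "Content-type: text/plain\r\n\r\nOK\r\n");
        FCGX_Finish_r(&req);
    }
    return 0;
}

The relevant point for your question 2 is that calls like fluid_synth_pitch_bend() take effect in real time on a running synth, so nothing has to start over from the beginning of the song.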


One thing about such a system is that it would require quite a lot of CPU and probably would not scale very well.  So in cases where there were a lot of users (say, more than a couple), a distributed, server-farm-like topology would be needed (synthesis servers, MP3 encoding servers, etc.).

I am interested to hear the details of your offer to pay for such a project.

Best regards,

Element Green

