
Re: [Linphone-developers] ortp: processing incoming stream


From: Simon Morlat
Subject: Re: [Linphone-developers] ortp: processing incoming stream
Date: Fri, 27 Nov 2009 17:25:28 +0100

Ok, I understand.
The correct way to switch off the jitter buffer is to call
rtp_session_enable_jitter_buffer(session, FALSE);
In this mode you'll get all packets, regardless of whether they are late
or not.
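
For example, a minimal sketch (not from this thread; the helper names are
hypothetical, 'session' is assumed to be an already configured RtpSession
and 'user_ts' the application's own timestamp counter):

  #include <ortp/ortp.h>

  /* sketch: configure an existing session so that rtp_session_recvm_with_ts()
     hands back every queued packet as soon as it has arrived, late or not */
  static void disable_jitter_buffer(RtpSession *session)
  {
      rtp_session_enable_jitter_buffer(session, FALSE);
  }

  /* one receive tick, driven by the application's own timer */
  static void receive_tick(RtpSession *session, uint32_t user_ts)
  {
      mblk_t *m = rtp_session_recvm_with_ts(session, user_ts);
      if (m != NULL) {
          /* process the payload here ... */
          freemsg(m);   /* the receiver owns the returned packet */
      }
  }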

About SSID: don't you mean SSRC? Yes, you should expect a random value.
How did you see this 0x29 value? gdb? wireshark?

Simon

On Friday, 27 November 2009 at 15:39 +0100, Petr Kuba wrote:
> Hi Simon,
> 
> Thanks for your response.
> 
> I've tested your approach and it works well when the jitter of the
> incoming stream is low and adaptive jitter compensation is enabled.
> 
> However, if I disable the jitter buffer I miss quite a lot of packets 
> (jitter is up to 10ms) and I have to use rtp_getq_permissive(). Then I 
> receive all the packets.
> 
> I'm handling situations where I just forward packets from one RTP stream
> to another, and I don't want to delay the packets by using a jitter buffer.
> And I don't want to miss packets even if they are e.g. 40 ms late.
> 
> So, I have several questions:
> 
> 1) What is the correct way to receive all the packets without modifying
> the oRTP source code if I don't use a jitter buffer? It would be nice to
> obtain a packet immediately after it is delivered.
> 
> 2) I'm not sure how to correctly configure the jitter buffer. What is the
> correct way to switch the jitter buffer off: calling
> rtp_session_set_jitter_compensation(s, 0) or calling nothing?
> 
> 3) When switching the jitter buffer on, do I have to call
> rtp_session_set_jitter_compensation() before 
> rtp_session_enable_adaptive_jitter_compensation()? And what happens if I 
> call rtp_session_set_jitter_compensation() after 
> rtp_session_enable_adaptive_jitter_compensation()? Does it make sense to 
> combine these two methods?
> 
> 4) I've noticed that all my outgoing streams have SSID=0x29. I don't call
> rtp_session_set_ssrc() so I would expect it to have a random value. Do
> you have any idea what is wrong?
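
Regarding questions 2 and 3 above, the two calls are typically combined
along these lines in oRTP (a sketch only; the helper name is hypothetical
and the 80 ms value is an arbitrary example, not a recommendation from
this thread):

  #include <ortp/ortp.h>

  /* sketch: jitter buffer left enabled, with a nominal size and the
     adaptive algorithm turned on; 80 ms is an arbitrary example value */
  static void configure_jitter_buffer(RtpSession *session)
  {
      rtp_session_set_jitter_compensation(session, 80);   /* nominal size, in ms */
      rtp_session_enable_adaptive_jitter_compensation(session, TRUE);
  }

Switching the buffer off entirely is a separate call,
rtp_session_enable_jitter_buffer(session, FALSE), as Simon notes at the
top of this message.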
> 
> Thanks,
> Petr
> 
> 
> Simon Morlat wrote:
> > Hi Petr,
> > 
> > Your application should increment the timestamp using its theoretical
> > value (e.g. 10 ms if your application is supposed to wake up and do
> > processing every 10 ms), no matter whether it is a bit late compared to
> > real time or not.
> > The only important thing is that, in the long term, the timestamp and
> > the elapsed time are equivalent (no drift).
> > 
> > Simon
> > 
> > On Friday, 20 November 2009 at 14:43 +0100, Petr Kuba wrote:
> >> Hello,
> >>
> >> I'm experiencing the following problem:
> >>
> >> Since the timer we use for calling rtp_session_recvm_with_ts() is not
> >> 100% precise (mainly on Windows), it happens quite often that the time
> >> between two calls of rtp_session_recvm_with_ts() is (a little bit)
> >> longer than 20 ms. Then it quite often happens that there are already
> >> two packets older than the current timestamp in the oRTP queue.
> >>
> >> However, rtp_session_recvm_with_ts() calls rtp_getq(), which returns
> >> only the last packet and discards all older packets. I believe this is
> >> not a good idea when we are only a few ms late.
> >>
> >> In my case it would be better to call rtp_getq_permissive() instead of
> >> rtp_getq(), so as not to miss any packets.
> >>
> >> What do you think? Am I missing some important idea behind this
> >> algorithm? What is the reason for discarding a packet that we wouldn't
> >> have discarded e.g. just 1 ms earlier?
> >>
> >> Thanks,
> >> Petr
> >>
> >>
> > 
> 
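
As an illustration of the timestamp handling Simon describes in the quoted
message above, a receive loop with a fixed, theoretical increment could
look like this (a sketch only; the loop name is hypothetical, and the
20 ms tick with an 8 kHz payload clock are assumed values, not taken from
this thread):

  #include <ortp/ortp.h>

  /* sketch: 8 kHz payload clock, 20 ms tick => 160 timestamp units per tick */
  #define TS_INCREMENT 160

  static void receive_loop(RtpSession *session)
  {
      uint32_t user_ts = 0;

      for (;;) {
          mblk_t *m = rtp_session_recvm_with_ts(session, user_ts);
          if (m != NULL) {
              /* process the payload here ... */
              freemsg(m);
          }
          /* advance by the theoretical amount on every tick, even if the
             timer fired a little late; only long-term drift matters */
          user_ts += TS_INCREMENT;
          /* sleep until the next 20 ms tick here (timer is application
             and platform specific) */
      }
  }

The point is that user_ts grows by the same theoretical amount on every
tick, so an occasionally late timer does not shift the timestamps.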





