Re: [Swftools-common] issue when embedding audio
From: Pablo Rodríguez
Subject: Re: [Swftools-common] issue when embedding audio
Date: Sun, 26 Oct 2008 10:26:29 +0100
User-agent: Thunderbird 2.0.0.17 (X11/20080926)
Thanks for the reply, Matthias.
Matthias Kramm wrote:
On Sat, Oct 25, 2008 at 03:18:33PM +0200, Pablo Rodríguez <address@hidden> wrote:
Sorry, Matthias, I uploaded the wrong files. The right one is to be
found at http://ousia.justfree.com/prueba-audio.swf.
This WAV file is broken (truncated) indeed.
Or did you really expect 101 Minutes of audio data to be contained
in 1.7 Mb of data? :)
I recorded the file myself and it took less than a minute, so the 101
minutes of audio were clearly wrong ;-).
Btw., how did you create this file?
With a Python GStreamer script (below). The GStreamer documentation
describes "Controlled shutdown of live sources in applications"
(http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer-libs/html/GstBaseSrc.html)
and proposes the following solution:
Since GStreamer 0.10.16 an application may send an EOS event to a source
element to make it perform the EOS logic (send EOS event downstream or
post a GST_MESSAGE_SEGMENT_DONE on the bus). This can typically be done
with the gst_element_send_event() function on the element or its parent bin.
The problem is that I don't know how to use gst_element_send_event(),
and that is probably why the file ends up truncated.
    def __init__(self):
        self.player = gst.Pipeline("player")
        self.clock = self.player.get_clock()
        self.source = gst.element_factory_make("alsasrc", "alsa-source")
        self.encoder = gst.element_factory_make("wavenc", "wavenc")
        self.fileout = gst.element_factory_make("filesink", "sink")
        self.fileout.set_property("location", self.filename + "-audio.wav")
        self.player.add(self.source, self.encoder, self.fileout)
        gst.element_link_many(self.source, self.encoder, self.fileout)
        bus = self.player.get_bus()
        bus.add_signal_watch()
        bus.enable_sync_message_emission()
        bus.connect('message', self.on_message)
        self.playing = False
        self.recording_time = self.player.get_last_stream_time()

    def on_message(self, bus, message):
        t = message.type
        if t == gst.MESSAGE_EOS:
            self.player.set_state(gst.STATE_NULL)
        elif t == gst.MESSAGE_ERROR:
            err, debug = message.parse_error()
            print "Error: %s" % err, debug
            self.player.set_state(gst.STATE_NULL)

    def on_key_press_event(self, widget, event):
        if event.keyval == gtk.keysyms.space:
            if not self.playing:
                self.playing = True
                self.player.set_state(gst.STATE_PLAYING)
            else:
                # stopping: reset the flag so a second press can record again
                self.playing = False
                self.player.set_state(gst.STATE_NULL)
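If it helps, a sketch of what the quoted documentation seems to suggest for the stop branch above. This is only my guess at the pygst 0.10 spelling of gst_element_send_event(): the send_event() method and gst.event_new_eos() are assumptions I have not tested against GStreamer.

```python
# Sketch only: instead of jumping straight to STATE_NULL (which cuts
# wavenc off before it can finalize the WAV header), send an EOS event
# into the pipeline and let on_message() do the shutdown.
def stop_recording(self):
    self.playing = False
    # EOS travels downstream, the source and encoder flush, and the bus
    # posts gst.MESSAGE_EOS; only then does on_message() set STATE_NULL,
    # so the resulting WAV file should be complete.
    self.player.send_event(gst.event_new_eos())
```

With this, the space-bar handler would call stop_recording() rather than setting STATE_NULL directly.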
Thanks for your help,
Pablo