Sorry if this is a bit off topic, but...
The Thales SpyArrow presentation linked from the wiki home page
(http://newton.ee.auth.gr/aerial_space/docs/CS_4.pdf) describes a
system where the live video stream is georeferenced, and shows what
appears to be a stitched image. I'm interested in how to do this, and
was hoping somebody might give me some hints.
I had imagined a post-processing (on the ground) combination of video
and telemetry:
1. Convert the video stream into a timestamped sequence of images as
   they arrive, using the native features of a video capture card and
   operating system.
2. Interpolate the telemetry stream to estimate {lat, long, altitude,
   pitch, roll, yaw} at the exact time of each image.
3. Orthorectify each image and load them into a temporary GIS raster
   layer in a database.
4. Use an elevation model and some geometry to infer the location of
   some points in the stretched image.
   [4b. Use fancy pattern recognition and/or Oompa-Loompas to identify
   points...]
5. Process the images (stitch, filter, etc.) and post fixed
   rectangular tiles to another GIS layer.
6. Combine tiles (perhaps along with other spatial data) in a GIS
   client to produce images/maps as required (periodically refreshed).
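To make step 2 concrete, here is a minimal sketch of interpolating a
telemetry stream at an image timestamp. All names are my own (nothing
Paparazzi-specific): telemetry records are assumed to be sorted
(time, lat, lon, alt, pitch, roll, yaw) tuples, positions are
interpolated linearly, and angles (in degrees) take the shortest way
around to cope with heading wrap-around.

```python
import bisect

def lerp(a, b, t):
    # Plain linear interpolation between a and b, t in [0, 1].
    return a + (b - a) * t

def lerp_angle(a, b, t):
    # Interpolate angles in degrees along the shortest arc,
    # so e.g. 350 -> 10 passes through 0, not 180.
    d = ((b - a + 180.0) % 360.0) - 180.0
    return (a + d * t) % 360.0

def interpolate_state(telemetry, t):
    """telemetry: list of (time, lat, lon, alt, pitch, roll, yaw)
    tuples sorted by time. Returns the estimated state at time t."""
    times = [rec[0] for rec in telemetry]
    i = bisect.bisect_right(times, t)
    if i == 0 or i == len(telemetry):
        raise ValueError("timestamp outside telemetry range")
    t0, *s0 = telemetry[i - 1]
    t1, *s1 = telemetry[i]
    f = (t - t0) / (t1 - t0)
    lat, lon, alt = (lerp(a, b, f) for a, b in zip(s0[:3], s1[:3]))
    pitch, roll, yaw = (lerp_angle(a, b, f) for a, b in zip(s0[3:], s1[3:]))
    return (lat, lon, alt, pitch, roll, yaw)
```

In practice the image timestamps and telemetry timestamps would need to
share a clock (or have a measured offset), which is probably the hard
part of step 1.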
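And a rough sketch of the geometry in step 4, under deliberately
strong assumptions (my own, not from the presentation): a
nadir-pointing camera, locally flat terrain at a single elevation from
the DEM, a simple pinhole model parameterised by horizontal field of
view, and a small-area metres-to-degrees conversion. Real
orthorectification would use the full pitch/roll attitude and the
elevation model per pixel.

```python
import math

def pixel_to_ground(px, py, width, height, fov_deg,
                    alt, terrain_elev, lat, lon, yaw_deg):
    """Estimate the (lat, lon) under image pixel (px, py).
    yaw_deg is heading, clockwise from north."""
    agl = alt - terrain_elev  # height above ground level, metres
    # Ground footprint width from the horizontal field of view.
    ground_width = 2.0 * agl * math.tan(math.radians(fov_deg) / 2.0)
    m_per_px = ground_width / width
    # Camera-frame offsets in metres: x right, y forward (up in image).
    dx = (px - width / 2.0) * m_per_px
    dy = (height / 2.0 - py) * m_per_px
    # Rotate into north/east components by the aircraft heading.
    yaw = math.radians(yaw_deg)
    north = dy * math.cos(yaw) - dx * math.sin(yaw)
    east = dy * math.sin(yaw) + dx * math.cos(yaw)
    # Small-area approximation: metres -> degrees.
    dlat = north / 111320.0
    dlon = east / (111320.0 * math.cos(math.radians(lat)))
    return (lat + dlat, lon + dlon)
```

Even a crude version like this is enough to place a handful of tie
points per frame, which step 5 could then refine during stitching.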
Am I on the right track? Is there a working open source solution
already?
Chris Gough
_______________________________________________
Paparazzi-devel mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/paparazzi-devel