Re: [Paparazzi-devel] georeferencing video stream?
From: Chris Gough
Subject: Re: [Paparazzi-devel] georeferencing video stream?
Date: Fri, 2 Oct 2009 10:44:59 +1000
Austin, are you using the IMU to stabilise attitude and then assuming the
camera is level, or are you getting attitude measurements from the IMU
and mixing them into your orthorectification process? If it's the
latter, I'm not sure how a stabilised gimbal would help - except that
if it's stabilised to point directly down, you wouldn't need to
orthorectify much if at all (just fix the lens distortion and wear the
gimbal error).
I initially thought that knowing the position of the camera and the
direction it's pointing would be sufficient to orthorectify, but from
your experience it seems that compounding errors make it a challenge
to create sufficiently accurate fiducials from geometry alone. I
looked at the ensomosaic site, and it seems they may be using computer
vision to extract features from each image and, where they can match
features between two or more images, using them as fiducials both for
orthorectification/stitching and for triangulating points in a surface
mesh (impressive!). It seems they may be using the IMU+GPS measurements
to position the stitched image absolutely, not to position points
within it.
I had a quick look at the GRASS orthorectification process; it works
because fiducials are specified for each image prior to
orthorectification. The examples in the documentation show it as a
manual (but scriptable) process. I bothered a GIS analyst (ESRI fan)
who told me that in her experience the normal way to process
satellite imagery is to manually identify fiducials in each image (road
intersections are her favourite). This is OK if you are combining a
small number of very high resolution images, but impractical for 30
frames per second at 640*480 or whatever (unless of course your
imagery is worth the effort; I want an effortlessly updated GIS as a
consequence of each flight).
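
For what it's worth, the per-image fiducial step does look scriptable.
Here is a minimal sketch using GDAL's python bindings as a stand-in for
the GRASS process (same idea: attach ground control points, then warp).
The filenames, coordinates and EPSG code are all made up:

    from osgeo import gdal

    # Each fiducial pairs an image coordinate (pixel, line) with a
    # ground coordinate (easting, northing) -- e.g. a road intersection.
    fiducials = [
        # (pixel, line, easting,  northing)
        (120.0,  80.0, 489123.0, 6094201.0),
        (610.0,  95.0, 489410.0, 6094188.0),
        (590.0, 420.0, 489398.0, 6093990.0),
        (105.0, 400.0, 489110.0, 6094005.0),
    ]
    gcps = [gdal.GCP(e, n, 0.0, px, ln) for (px, ln, e, n) in fiducials]

    # Attach the GCPs, then resample into the target CRS (thin plate
    # spline fitted through the control points).
    gdal.Translate("frame_gcp.tif", "frame.tif", GCPs=gcps,
                   outputSRS="EPSG:32755")
    gdal.Warp("frame_ortho.tif", "frame_gcp.tif", tps=True,
              dstSRS="EPSG:32755")

Run per frame, that would be the "manual but scriptable" process with
the manual part automated away, provided we can generate the fiducials.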
The GIMP image stitching plugin is a semi-manual process. It uses some
image processing magic to detect a bunch of reference points, but
suggests you will get even better results if you manually add some
yourself (perfectionists). I don't know if the GIMP plugin makes the
identity and position of the automatically detected features
available, but it's a python plugin, so determined effort could get
them... A quick experiment shows the vision-based fiducials seem to do
a much better job (on some 10M pixel holiday snaps) than the OSAM
geometry-based fiducials. No measurements, but a cursory visual
inspection can't detect any seams!
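
I haven't dug into how the plugin detects its reference points, but as
a sketch of the general idea, OpenCV (swapped in here for whatever the
plugin actually uses) can produce matched feature pairs between
overlapping frames; each match is a candidate vision-based fiducial.
File names are hypothetical:

    import cv2

    img_a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

    # Detect keypoints and compute binary descriptors in both frames.
    orb = cv2.ORB_create(nfeatures=500)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Brute-force Hamming matching; cross-checking culls one-way matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

    # Each surviving match is the same ground feature seen at two image
    # coordinates -- a candidate fiducial for stitching.
    pairs = [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt)
             for m in matches[:50]]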
How about this then:
1. Get a timestamped series of images.
2. For each image:
     a. Correct for lens distortion, using an empirically
        calibrated routine for the specific camera.
     b. Interpolate telemetry to get an estimated camera
        attitude/position vector (at the time of image capture).
     c. Input the attitude/position vector to an empirically
        calibrated routine that yields a vector of instrument
        fiducials (a collection of {image coordinates, ground
        plane coordinates} pairs) -- there's a sketch of b and c
        after the notes below.
     d. Use the GRASS process and the instrument fiducials
        to orthorectify (but not stitch) the image. As I
        understand it, if I also stitched here it would be
        equivalent to the OSAM technique; is that right?
     e. Somehow [*A] calculate the new coordinates of the
        instrument fiducials in the orthorectified image,
        yielding orthorectified instrument fiducials.
3. For batches of intersecting orthorectified images (based on
their orthorectified instrument fiducials):
     a. Stitch into a single image with the GIMP plugin,
        using auto-detected features. Find out [*A] the
        post-stitching image coordinates of the
        orthorectified instrument fiducials of each image,
        i.e. apply the same topological transformation to
        the instrument fiducials as was applied to the
        image.
     b. Use some statistical technique to derive a new set
        of combined orthorectified instrument fiducials for
        the stitched image (minimum root mean squared
        error?) [*B]
     c. Load the stitched image into the GIS, georeferenced by
        the combined orthorectified instrument fiducials, and
        attributed with a summary of the estimated camera
        attitudes/positions of the images that went into it.
4. Optionally, use a GIS client to produce "most recent on top" tiles
from the GIS layer populated in step 3, and publish them to a new GIS
layer.
[*A] No idea... maybe after a deeper look at the GIMP plugin.
[*B] Basically, position the stitched image absolutely, based on the
combined telemetry estimates of all the images. The more images that
went into the stitch, the better the attitude error correction :)
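
To make 2b and 2c concrete, here is a rough python sketch of what I
have in mind: interpolate the telemetry to the frame timestamp, then
cast rays through chosen pixels and intersect them with a flat ground
plane. All the conventions here (NED frame, nadir-mounted camera, axis
mapping) are assumptions, not calibrated facts:

    import numpy as np

    def interp_telemetry(t, times, states):
        """Step 2b: linearly interpolate telemetry rows
        [north, east, down, roll, pitch, yaw] to time t."""
        return np.array([np.interp(t, times, states[:, i])
                         for i in range(states.shape[1])])

    def rotation_body_to_ned(roll, pitch, yaw):
        """Standard aerospace yaw-pitch-roll (Z-Y-X) rotation matrix."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        return rz.dot(ry).dot(rx)

    def pixel_to_ground(px, py, f_px, cx, cy, cam_ned, R):
        """Step 2c: intersect the ray through pixel (px, py) with the
        ground plane z = 0.  f_px is the focal length in pixels,
        (cx, cy) the principal point, cam_ned the camera position
        [north, east, down] (down is negative above ground), R the
        body-to-NED matrix.  Assumes the camera looks along the body
        z (down) axis."""
        ray_cam = np.array([(py - cy) / f_px, (px - cx) / f_px, 1.0])
        ray_ned = R.dot(ray_cam)
        if ray_ned[2] <= 0:            # ray never reaches the ground
            return None
        s = -cam_ned[2] / ray_ned[2]   # scale factor to hit z = 0
        return cam_ned + s * ray_ned   # [north, east, 0]

Running the four image corners through pixel_to_ground would give the
{image coordinates, ground plane coordinates} pairs of step 2c.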
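And for [*B], the "minimum root mean squared error" idea could just be
ordinary least squares: fit the affine placement of the stitched image
that best agrees with all the combined fiducials at once. Again only a
sketch:

    import numpy as np

    def fit_affine(pixels, grounds):
        """pixels, grounds: (N, 2) arrays of stitched-image coordinates
        and their telemetry-estimated ground coordinates.  Returns the
        2x3 affine A minimising the RMS error of A * [x, y, 1] -> ground."""
        n = len(pixels)
        design = np.hstack([pixels, np.ones((n, 1))])      # (N, 3)
        coeffs, _, _, _ = np.linalg.lstsq(design, grounds)  # (3, 2)
        return coeffs.T

    # The more frames contribute fiducials, the more their independent
    # attitude errors should average out of the fit.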
Comments?
Chris Gough
On Fri, Oct 2, 2009 at 5:18 AM, Austin Jensen
<address@hidden> wrote:
> Todd,
>
> We have analyzed the errors in the altitude estimation, the yaw and position
> (assuming roll and pitch were good). We found that after correcting for yaw
> error, the orthorectification error decreased to 5-20m. After correcting for
> position, the error decreased to below 5m. The altitude correction made no
> significant improvement. Since it is a more difficult problem, we haven't
> looked at roll and pitch yet, but we are working on it. Since we are using
> an IMU, we expect that our biggest contributor to this error will be GPS. If
> there are problems with roll and pitch, it will probably be a bias from the
> sensor or a misalignment between the camera and the IMU. We will see though.
>
> Austin
>
> ------------------------------------------------------------------
> Austin Jensen, Research Engineer
> Utah Water Research Laboratory (UWRL)
> Utah State University, 8200 Old Main Hill
> Logan, UT, 84322-8200, USA
> E: address@hidden
> T: (435)797-3315
>
>
> On Thu, Oct 1, 2009 at 10:10 AM, Todd Sandercock <address@hidden>
> wrote:
>>
>> Hi Austin and all
>> I am guessing that most of the error came from roll and pitch in your
>> study. Do you think this could be rectified by a stabilised camera on board
>> that ideally always faces directly down?
>> Of course there is always the error in yaw but that can be solved in a few
>> different ways
>> Todd
>> ________________________________
>> From: Austin Jensen <address@hidden>
>> To: address@hidden
>> Sent: Thursday, 1 October, 2009 3:33:22 AM
>> Subject: Re: [Paparazzi-devel] georeferencing video stream?
>>
>> Chris,
>>
>> Sounds like you're on the right track. The biggest problem you will face
>> will be the accuracy of your orthorectification based on the sensors of the
>> aircraft (especially the IR sensors). We did a study on it and found that
>> the orthorectification error can vary from 5 to 40m depending on your
>> altitude. And that was using an IMU. Here is an example:
>>
>> http://www.engr.usu.edu/wiki/index.php/Image:OSAMBeforeMan.PNG
>>
>> We are working on ways to improve this by calibrating the aircraft sensors
>> in flight.
>>
>> I suspect that the method used to georeference the images in the
>> presentation you mentioned might use the aircraft sensors to help
>> georeference, but it's probably based more on the features in the images. I
>> know of one open source project that stitches images based on features.
>>
>> http://jimatis.sourceforge.net/
>>
>> A different proprietary software package called ensomosaic does a very
>> good job at georeferencing the images using position and orientation.
>>
>> http://www.ensomosaic.com/
>>
>> Austin
>>
>> ------------------------------------------------------------------
>> Austin Jensen, Research Engineer
>> Utah Water Research Laboratory (UWRL)
>> Utah State University, 8200 Old Main Hill
>> Logan, UT, 84322-8200, USA
>> E: address@hidden
>> T: (435)797-3315
>>
>>
>> On Wed, Sep 30, 2009 at 1:14 AM, Todd Sandercock
>> <address@hidden> wrote:
>>>
>>> I am working on the same ideas as you.
>>> Paparazzi is extremely suitable for this because it is soooooo easy to
>>> get your hands on any data that you want by using the Ivy bus.
>>> Skipping the image processing part and using location from paparazzi
>>> seems to be a more feasible initial solution though. That is, if position
>>> on the ground is something important to you....
>>> I have found it extremely difficult to find a GIS client suitable for
>>> the job. There is one successful implementation in the image processing
>>> area of the OSAM wiki though.
>>> Todd
>>> ________________________________
>>> From: Chris Gough <address@hidden>
>>> To: address@hidden
>>> Sent: Wednesday, 30 September, 2009 9:11:57 AM
>>> Subject: [Paparazzi-devel] georeferencing video stream?
>>>
>>> Sorry if this is a bit off topic, but...
>>>
>>> The Thales SpyArrow presentation linked to from the wiki home page
>>> (http://newton.ee.auth.gr/aerial_space/docs/CS_4.pdf) refers to a
>>> system where the live video stream is georeferenced, and shows what
>>> appears to be a stitched image. I'm interested in how to do
>>> this, and was hoping somebody might give me some hints.
>>>
>>> I had imagined a post-processing (on the ground) combination of video
>>> and telemetry:
>>> 1. Convert the video stream into a timestamped sequence of images as
>>>    they arrive, using the native features of a video capture card and
>>>    operating system.
>>> 2. Interpolate the telemetry stream to estimate {lat, long, altitude,
>>>    pitch, roll, yaw} at the exact time of each image.
>>> 3. Orthorectify each image and load them into a temporary GIS raster
>>>    layer in a database.
>>> 4. Use an elevation model and some geometry to infer the location of
>>>    some points in the stretched image.
>>>    [4b. use fancy pattern recognition and/or Oompa-Loompas to identify
>>>    points...]
>>> 5. Process the images (stitch, filter, etc.) and post fixed
>>>    rectangular tiles to another GIS layer.
>>> 6. Combine tiles (perhaps along with other spatial data) in a GIS
>>>    client to produce images/maps as required (periodically refreshed).
>>>
>>> Am I on the right track? Is there a working open source solution already?
>>>
>>> Chris Gough
>>>
>>>