octave-maintainers
Re: svds test failure


From: Daniel J Sebald
Subject: Re: svds test failure
Date: Wed, 01 Aug 2012 12:37:40 -0500
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.24) Gecko/20111108 Fedora/3.1.16-1.fc14 Thunderbird/3.1.16

On 08/01/2012 11:54 AM, Ed Meyer wrote:


On Tue, Jul 31, 2012 at 1:41 PM, Daniel J Sebald <address@hidden> wrote:

    On 07/31/2012 03:23 PM, Ed Meyer wrote:



        On Tue, Jul 31, 2012 at 12:31 PM, Daniel J Sebald
        <address@hidden> wrote:

             On 07/31/2012 02:22 PM, Ed Meyer wrote:

                 As I pointed out in the "make check" thread, the problem
                 is that we should not be using absolute tolerances,
                 because they do not account for the size of the matrix
                 elements.  What we should do is estimate whether the
                 result lies in the interval we would get with roundoff
                 perturbations in the matrix.  If it does, that's as good
                 as can be expected.  Eispack used to do something like
                 that with their "performance index", but I think you can
                 get better estimates.


             Give us an idea of what code you are suggesting, Ed.
          Something like

             eps*size(s,1)*size(s,2)

             ?  Something concerning the rank of the input matrix?

             Dan

             PS: Did you mean to CC to the maintainers list?


        oops - I did mean to CC
        I'm attaching a page from a manual I wrote describing how I judge
        results.  It's more involved than the old eispack tests and maybe
        more work than we want to do.  Much simpler would be to just use
        the norm of the matrix times some factor times machine epsilon
        instead of the absolute tolerances.


    We're talking the linear case here, so what does equation (7) become
    in that case?  A'(lambda) should simply be a constant matrix?  Does
    that then turn out to be the Frobenius norm or something?  So

for the linear case A(lambda) = B - lambda*I, where B is the matrix whose
SVD we want

    tol = 100*eps*norm(A, 'fro');

    and


    %!testif HAVE_ARPACK
    %! s = svds (speye (10));
    %! assert (s, ones (6, 1), tol);

    Something like that?
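
    For concreteness, here is a sketch of how that might look in the test
    file (my guess at the wiring, assuming svds defaults to the k = 6
    largest singular values and using Ed's factor of 100):

    %!testif HAVE_ARPACK
    %! A = speye (10);
    %! tol = 100*eps*norm (A, "fro");  # relative tolerance from the matrix norm
    %! s = svds (A);                   # defaults to the 6 largest singular values
    %! assert (s, ones (6, 1), tol);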

    And what of the degenerate case of

    %!testif HAVE_ARPACK
    %! k = 7;
    %! [u2, s2, v2, flag] = svds (zeros (10), k);
    %! assert (u2, eye (10, k));
    %! assert (s2, zeros (k));
    %! assert (v2, eye (10, k));

    where the norm is zero?  Just leave it as is, because we aren't really
    testing the performance in this case so much as the library's ability
    to handle the degenerate case?

    Dan

To handle the case of a matrix with zero (or small) elements, we could use

   tol = 100*eps*min (1.0, norm(A,"fro"))

You meant "max", correct?
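
For example (my reading of the intent), with the all-zero matrix:

   A = zeros (10);
   100*eps*min (1.0, norm (A, "fro"))  # => 0: the test would demand exactness
   100*eps*max (1.0, norm (A, "fro"))  # => 100*eps: a sensible floor

"max" keeps the tolerance from collapsing to zero when the norm does,
while still scaling up with large matrices.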


The results from ARPACK should not depend on the starting vector; if
they do, there is probably something wrong with the installation.

I agree.  It is the "something wrong with the installation" part we are
trying to test here.  The accuracy of the ARPACK algorithm is the user's
concern, and of course should be evaluated by the user in the context of
the application.


 Which brings up another point - computing an SVD by solving a (larger)
eigenvalue problem is probably not the best way.  There are sparse SVD
solvers out there that might be more efficient and robust.  I'll try a
few.
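
For anyone following along, the "larger eigenvalue problem" here is, I
believe, the usual augmented-matrix trick: the eigenvalues of the
symmetric matrix [0 A; A' 0] come in +/- pairs of the singular values of
A.  A quick sketch of the equivalence (assuming the test matrix has at
least 6 nonzero singular values):

   A = sprandn (50, 30, 0.1);                   # random sparse test matrix
   B = [sparse(50,50), A; A', sparse(30,30)];   # symmetric augmented matrix
   d = eigs (B, 6, "la");                       # 6 largest (algebraic) eigenvalues
   s = svds (A, 6);                             # 6 largest singular values
   assert (sort (d), sort (s), 100*eps*max (1.0, norm (A, "fro")))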

As for test cases, I put together a case to create randomly sized and
shaped matrices which are very ill-conditioned to test the algorithm;
I'll use it to compare SVD solvers.  I was surprised that Arnoldi did
not converge for some test cases.

That's interesting.  I looked up the Arnoldi algorithm.  It is meant as a
way to deal with the instability of Krylov/power iteration, so perhaps it
isn't surprising that it has issues as well.  A couple of researchers
proposed implicitly restarting the Arnoldi iteration (IRAM) when it runs
too long.  Supposedly that technique is in ARPACK.
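
Something like the following is what I imagine for such a generator (a
sketch only, not Ed's actual code): build a random matrix with a known,
geometrically decaying singular spectrum and check svds against the
prescribed values.

   m = 200;  n = 150;  k = 6;
   [u, ru] = qr (randn (m, n), 0);     # m-by-n with orthonormal columns
   [v, rv] = qr (randn (n));           # n-by-n orthogonal
   s = logspace (0, -12, n)';          # singular values spanning 12 decades
   A = u * diag (s) * v';              # cond (A) is about 1e12
   sk = svds (sparse (A), k);          # k largest singular values via ARPACK
   tol = 100*eps*max (1.0, norm (A, "fro"));
   assert (sk, s(1:k), tol);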

Dan

