help-octave

regression confidence intervals


From: CdeMills
Subject: regression confidence intervals
Date: Mon, 7 Feb 2011 23:50:52 -0800 (PST)

Hello,

my math courses were a long time ago ... could someone tell me which of these
two approaches is the right one? An experiment produced n pairs of points
(xi, yi) for which a first-order model is postulated:
yi = [1 xi] [b; a]

The parameters are found by letting
A = [ones(size(x)) x]; B = y; th = A\B;
b = th(1); a = th(2); %# note: [b, a] = deal(th) would copy the whole vector into both

A covariance matrix can be found by
S2y= sumsq(B-A*th)/(length(y)-2); %# a posteriori variance 
Cth = inv(A'*A)*S2y;
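In case it helps to have a runnable version, here is a sketch of the same fit and covariance in Python/NumPy (the data x, y below are made up purely for illustration; the variable names follow the Octave snippets above):

```python
import numpy as np

# Hypothetical data for illustration only (true model: y = 2 + 0.5*x + noise).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 20)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=x.size)

# Design matrix A = [1 x]; least squares, equivalent to Octave's th = A\B.
A = np.column_stack([np.ones_like(x), x])
th, *_ = np.linalg.lstsq(A, y, rcond=None)
b, a = th  # intercept b, slope a

# A posteriori variance of the residuals, with n - 2 degrees of freedom.
n = len(y)
S2y = np.sum((y - A @ th) ** 2) / (n - 2)

# Covariance matrix of the estimated parameters.
Cth = S2y * np.linalg.inv(A.T @ A)
```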

From here, for a given, errorless x0, I want a 90% confidence interval
for y0.
First way: using error propagation, I can compute the variance of y0 as
Sy0 = [1 x0]*Cth*[1; x0]; %# notice that this depends on x0
but then I start from S2y, which has n-2 degrees of freedom, go to [db da],
which is bivariate normal, and then to Sy0, which is univariate normal. Can I
assume that (y-y0)^2/Sy0 is chi-square with one d.f.?
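For what it's worth, the standard textbook treatment of this first approach notes that because S2y is itself estimated with n-2 degrees of freedom, the standardized prediction error follows a Student-t distribution with n-2 d.f. rather than a standard normal (equivalently, its square is not exactly chi-square with one d.f.). A self-contained sketch with hypothetical data (x0 = 4.0 is an arbitrary choice):

```python
import numpy as np
from scipy import stats

# Hypothetical data, as in the fit above (true model: y = 2 + 0.5*x + noise).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 20)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=x.size)

A = np.column_stack([np.ones_like(x), x])
th, *_ = np.linalg.lstsq(A, y, rcond=None)
n = len(y)
S2y = np.sum((y - A @ th) ** 2) / (n - 2)
Cth = S2y * np.linalg.inv(A.T @ A)

# Variance of the fitted value at an errorless x0 (depends on x0).
x0 = 4.0
v = np.array([1.0, x0])
Sy0 = v @ Cth @ v
y0 = v @ th

# Since S2y is estimated with n - 2 d.f., use Student-t, not the normal.
tcrit = stats.t.ppf(0.95, df=n - 2)  # two-sided 90% interval
ci = (y0 - tcrit * np.sqrt(Sy0), y0 + tcrit * np.sqrt(Sy0))
```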

Second way: [db da] is bivariate normal, so, applying the concept of
Mahalanobis distance,
[db da] * inv(Cth) * [db; da]
is chi-square with 2 d.f., and the confidence region is an ellipse. From
there, I compute dy = db + x0*da and choose the pair [db, da] on the ellipse
that gives rise to an extremum of dy. I then conclude that the 90% region on
(db, da), the ellipse, is mapped to [y0-dymin, y0+dymax].
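If it's useful, the extremum of dy over that ellipse has a closed form: maximizing c'*delta subject to delta'*inv(Cth)*delta <= k^2 gives |dy| <= k*sqrt(c'*Cth*c) (Cauchy-Schwarz in the Cth metric). This is a Scheffé-style simultaneous band, so it comes out wider than the pointwise interval of the first approach. A sketch with hypothetical data, using the chi-square cutoff as in the post (with an estimated variance, the exact finite-sample cutoff would be 2*F(2, n-2)):

```python
import numpy as np
from scipy import stats

# Hypothetical data, same setup as above (true model: y = 2 + 0.5*x + noise).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 20)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=x.size)

A = np.column_stack([np.ones_like(x), x])
th, *_ = np.linalg.lstsq(A, y, rcond=None)
n = len(y)
S2y = np.sum((y - A @ th) ** 2) / (n - 2)
Cth = S2y * np.linalg.inv(A.T @ A)

x0 = 4.0
c = np.array([1.0, x0])
y0 = c @ th
Sy0 = c @ Cth @ c

# 90% ellipse for [db da]: db'*inv(Cth)*db <= chi2.ppf(0.90, 2).
k = np.sqrt(stats.chi2.ppf(0.90, df=2))

# Extremum of dy = db + x0*da over the ellipse: |dy| <= k*sqrt(c'*Cth*c).
dy_max = k * np.sqrt(Sy0)

# Pointwise 90% half-width from the first approach, for comparison:
# the ellipse (simultaneous) band is wider.
t_half = stats.t.ppf(0.95, df=n - 2) * np.sqrt(Sy0)
```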

What is the correct approach? Any pointer to the literature?

Regards

Pascal

-- 
View this message in context: 
http://octave.1599824.n4.nabble.com/regression-confidence-intervals-tp3274454p3274454.html
Sent from the Octave - General mailing list archive at Nabble.com.

