New Square Method

Abstract: The “new square method” is an improved approach based on the “least square method”. In the course of data regression it calculates not only the constants and coefficients of a model but also the power values of its variables, which makes the regression of non-linear data simpler and more accurate.

I. Preface

In non-linear data regression, the “least square method” relies on mathematical substitutions and transformations of the model, and the regression results are not always correct. We have therefore improved the method and named the improved version the “new square method”.

II. Principle of the New Square Method
While investigating the correlation between two variables x and y (see Figure 1), suppose the model

y = a0 + a1·x^k    (Expression 1)

where a0, a1 and k may be any real numbers. To establish the fitted equation, the values of a0, a1 and k need to be determined. For each data point the calculated value is subtracted from the measured value to give the residual m. Then the quadratic sum of the residuals is calculated:

M = Σ m_i² = Σ (y_i − ŷ_i)²    (Expression 2)

where y_i is the measured value, ŷ_i is the value calculated from the model, and the sum runs over all data points. Substituting Expression 1 into Expression 2 gives Expression 3:

M = Σ (y_i − a0 − a1·x_i^k)²    (Expression 3)

The partial derivatives of this function with respect to a0, a1 and k are then taken and set equal to zero, giving a set of three equations.
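For reference, written out explicitly (a standard least-squares derivation; these equations are not shown in the original text), the conditions ∂M/∂a0 = 0, ∂M/∂a1 = 0 and ∂M/∂k = 0 become:

Σ (y_i − a0 − a1·x_i^k) = 0
Σ (y_i − a0 − a1·x_i^k)·x_i^k = 0
Σ (y_i − a0 − a1·x_i^k)·a1·x_i^k·ln(x_i) = 0

Because k appears both in the exponent of x_i^k and, through ln(x_i), in the last equation, the system is non-linear in k.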
Through derivation it is found that there is no analytic solution to this equation set, so a computer program is used to compute numerical solutions, which yield the values of a0, a1 and k as well as the correlation coefficient of the fit.
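As a concrete illustration (not part of the original paper), the following Python sketch fits Expression 1 by general non-linear least squares with SciPy's curve_fit, which performs the same kind of numerical minimisation of Expression 3 described above; the sample data and the starting values p0 are only illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a0, a1, k):
    """Expression 1: y = a0 + a1 * x**k (x assumed positive)."""
    return a0 + a1 * np.power(x, k)

# Illustrative data; in practice x and y come from measurements.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.0, 3.6, 4.1, 4.5, 4.9])

# Non-linear least squares: minimises the quadratic sum of residuals
# (Expression 3) numerically, since no analytic solution exists.
params, _ = curve_fit(model, x, y, p0=[1.0, 1.0, 1.0])
a0, a1, k = params

# Correlation coefficient between measured and fitted values.
y_fit = model(x, a0, a1, k)
r = np.corrcoef(y, y_fit)[0, 1]

print(f"a0={a0:.4f}, a1={a1:.4f}, k={k:.4f}, r={r:.4f}")
```

Here curve_fit plays the role of the computer program mentioned above; any general non-linear least-squares routine could be used instead.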
III. Model Selection

Mechanism Research Method

This method studies the inner relations of the process being modelled. After making assumptions about the process, mathematical variables are set up for the relations among data of two or more dimensions. These variables are then transformed mathematically to identify the explanatory variables and the objective function, and the coefficients of the mechanism model are obtained by data regression. The mechanism research method is suitable when the data are few, the data accuracy is low, and a mechanism model is needed to make up for these deficiencies.

Data Research Method

This method starts from the two-dimensional data themselves, taking the two dimensions of the data as the objective function y and the variable x. The way in which a change in the variable x causes a change in y
can be divided into six situations (Charts 1-6):

1. Linear increase: as x increases, y increases at a constant rate.
2. Linear decrease: as x increases, y decreases at a constant rate.
3. Non-linear increase: as x increases, y increases at an accelerating rate.
4. Non-linear increase: as x increases, y increases at a decelerating rate.
5. Non-linear decrease: as x increases, y decreases at an accelerating rate.
6. Non-linear decrease: as x increases, y decreases at a decelerating rate.

For all six situations, suppose the same model as in Expression 1, y = a0 + a1·x^k, with the following ranges of the parameters:
In the first situation, a0 > 0, a1 > 0 and k = 1.
In the second situation, a0 > 0, a1 < 0 and k = 1.
In the third situation, a0 > 0, a1 > 0, and k > 1 or k < 0.
In the fourth situation, a0 > 0, a1 > 0 and 0 < k < 1.
In the fifth situation, a0 > 0, a1 < 0 and 0 < k < 1.
In the sixth situation, a0 > 0, a1 < 0, and k > 1 or k < 0.
From the above summary, if the power form x^k is chosen, the value of k can be selected according to the relations among a0, a1 and k listed for Situations 1 to 6. In Situations 3 and 6 the curve is concave upward and resembles an exponential curve, so the exponential form e^x may also be selected. In Situations 4 and 5 the curve bulges upward and resembles a logarithmic curve, so the logarithmic form ln x (the logarithm with base e) may also be selected.
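As a rough illustration of this data research method (a sketch of my own, not from the original paper), the following Python function classifies a two-dimensional data set into one of the six situations using first and second differences and returns the corresponding suggestion for k from the list above; the function name and the tolerance are assumptions.

```python
import numpy as np

def classify_trend(x, y, tol=1e-8):
    """Roughly classify (x, y) data into one of the six situations
    using first and second differences; thresholds are illustrative."""
    slope = np.diff(y) / np.diff(x)   # approximate dy/dx
    curvature = np.diff(slope)        # change of the slope

    if np.all(slope > 0):             # y increases with x
        if np.all(np.abs(curvature) < tol):
            return 1, "a1 > 0, k = 1"
        if np.all(curvature > 0):     # accelerating rise (Situation 3)
            return 3, "a1 > 0, k > 1 or k < 0 (or try e^x)"
        return 4, "a1 > 0, 0 < k < 1 (or try ln x)"

    if np.all(slope < 0):             # y decreases with x
        if np.all(np.abs(curvature) < tol):
            return 2, "a1 < 0, k = 1"
        if np.all(curvature < 0):     # accelerating fall (Situation 5)
            return 5, "a1 < 0, 0 < k < 1 (or try ln x)"
        return 6, "a1 < 0, k > 1 or k < 0 (or try e^x)"

    return None, "no monotone trend; model choice unclear"

# Example: decelerating growth, expected to fall into Situation 4.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 3.4, 4.3, 5.0, 5.6])
print(classify_trend(x, y))
```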
The following problems need to be noticed when selecting parameters.

1. When a datum of some variable is 0, it cannot be used as a divisor or have its logarithm taken. A constant can be added to the whole dimension so that every value becomes greater than zero.

2. When the data contain negative values, they cannot be used directly in the regression computation. The data of that dimension can be multiplied by a negative number so that the values become greater than zero.

3. When a variable is raised to a power, its values must not be too large or too small, otherwise the regression computation may break off, and sometimes the model will enlarge the computing error.
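A minimal Python sketch of these three precautions might look as follows; the shift of +1, the sign flip and the rescaling threshold are assumptions chosen for illustration, not values prescribed by the paper.

```python
import numpy as np

def prepare_column(values):
    """Illustrative preprocessing of one data dimension before regression,
    following notes 1-3 above; the constants used are only examples."""
    v = np.asarray(values, dtype=float)

    # Note 2: data containing negatives cannot be regressed directly;
    # if the whole column is non-positive, multiply it by -1.
    if np.all(v <= 0) and np.any(v < 0):
        v = -v

    # Note 1: a zero cannot serve as a divisor or go into a logarithm,
    # so add a constant to the whole dimension to make every value positive.
    if np.any(v == 0):
        v = v + 1.0

    # Note 3: avoid values whose powers would overflow or underflow and
    # interrupt the regression; rescale the column if its magnitude is extreme.
    magnitude = np.max(np.abs(v))
    if magnitude > 1e3 or magnitude < 1e-3:
        v = v / magnitude

    return v

print(prepare_column([0.0, 250.0, 5000.0, 12000.0]))
```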