where $\epsilon_i \sim N(0, \sigma^2)$.

Then we can find the mean and variance of each $Y_i$.

\[
\text{E}[Y_i \mid X_i = x_i] = \beta_0 + \beta_1 x_i
\]
                                 and
\[
\text{Var}[Y_i \mid X_i = x_i] = \sigma^2.
\]

Additionally, the $Y_i$ follow a normal distribution conditioned on the $x_i$.

\[
Y_i \mid X_i \sim N(\beta_0 + \beta_1 x_i, \sigma^2)
\]
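To make this model concrete, here is a minimal simulation sketch in R; the particular parameter values, sample size, and seed are arbitrary choices for illustration and are not taken from the text. Later sketches in this section reuse the simulated x and y created here.

```r
# Simulate from the SLR model Y_i | X_i = x_i ~ N(beta_0 + beta_1 * x_i, sigma^2).
# The parameter values, sample size, and seed are arbitrary illustrative choices.
set.seed(42)
n      <- 25
beta_0 <- 5
beta_1 <- -2
sigma  <- 3

x <- runif(n, min = 0, max = 10)                        # fixed predictor values
y <- beta_0 + beta_1 * x + rnorm(n, mean = 0, sd = sigma)

plot(x, y, main = "Simulated data from the SLR model")
abline(a = beta_0, b = beta_1, lty = 2)                 # true mean function E[Y | X = x]
```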
Recall that the pdf of a random variable $X \sim N(\mu, \sigma^2)$ is given by

\[
f_X(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2\right].
\]
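As a quick sanity check, this density written out by hand agrees with R's built-in dnorm(); the values of the mean, standard deviation, and evaluation point below are arbitrary.

```r
# The normal pdf written out by hand should agree with R's dnorm();
# mu_ex, sigma_ex, and x_ex are arbitrary illustrative values.
mu_ex    <- 1
sigma_ex <- 2
x_ex     <- 0.5

by_hand  <- 1 / sqrt(2 * pi * sigma_ex ^ 2) * exp(-0.5 * ((x_ex - mu_ex) / sigma_ex) ^ 2)
built_in <- dnorm(x_ex, mean = mu_ex, sd = sigma_ex)

all.equal(by_hand, built_in)  # TRUE
```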
Then we can write the pdf of each of the $Y_i$ as

\[
f_{Y_i}(y_i; x_i, \beta_0, \beta_1, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{1}{2}\left(\frac{y_i - (\beta_0 + \beta_1 x_i)}{\sigma}\right)^2\right].
\]
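In R, this density can be evaluated for each observation by plugging the conditional mean into dnorm(); this sketch reuses x, y, and the true parameter values from the simulation above.

```r
# Density of each simulated y_i under the model, evaluated at its own
# conditional mean; x, y, beta_0, beta_1, and sigma come from the earlier sketch.
dens_each <- dnorm(y, mean = beta_0 + beta_1 * x, sd = sigma)
head(dens_each)
```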
Given $n$ data points $(x_i, y_i)$ we can write the likelihood, which is a function of the three parameters $\beta_0$, $\beta_1$, and $\sigma^2$. Since the data have been observed, we use lower case $y_i$ to denote that these values are no longer random.

\[
L(\beta_0, \beta_1, \sigma^2) = \prod_{i = 1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{1}{2}\left(\frac{y_i - \beta_0 - \beta_1 x_i}{\sigma}\right)^2\right]
\]
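Written as an R function, the likelihood is simply the product of these densities. The sketch below uses our own argument names and evaluates the likelihood on the simulated data from earlier.

```r
# The likelihood of (beta_0, beta_1, sigma) given the observed data:
# the product of the normal densities of the y_i at their conditional means.
slr_likelihood <- function(beta_0, beta_1, sigma, x, y) {
  prod(dnorm(y, mean = beta_0 + beta_1 * x, sd = sigma))
}

# Larger at the true parameters than at a poor guess (x and y from the sketch above)
slr_likelihood(beta_0 = 5, beta_1 = -2, sigma = 3, x = x, y = y)
slr_likelihood(beta_0 = 0, beta_1 =  0, sigma = 3, x = x, y = y)
```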
Our goal is to find values of $\beta_0$, $\beta_1$, and $\sigma^2$ which maximize this function, which is a straightforward multivariate calculus problem.
                                 We’ll start by doing a bit of rearranging to make our task easier.
                                                                              
\[
L(\beta_0, \beta_1, \sigma^2) = \left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^{n} \exp\left[-\frac{1}{2\sigma^2}\sum_{i = 1}^{n}(y_i - \beta_0 - \beta_1 x_i)^2\right]
\]
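A quick numerical check, using the simulated x and y from earlier, confirms that this rearranged form agrees with the product form above; the parameter values are arbitrary.

```r
# Check that the rearranged likelihood equals the product form; the parameter
# values are arbitrary, and x, y come from the earlier simulation sketch.
b0 <- 5; b1 <- -2; s <- 3

product_form    <- prod(1 / sqrt(2 * pi * s ^ 2) *
                          exp(-0.5 * ((y - b0 - b1 * x) / s) ^ 2))
rearranged_form <- (1 / sqrt(2 * pi * s ^ 2)) ^ length(y) *
  exp(-1 / (2 * s ^ 2) * sum((y - b0 - b1 * x) ^ 2))

all.equal(product_form, rearranged_form)  # TRUE
```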
Then, as is often the case when finding MLEs, for mathematical convenience we will take the natural logarithm of the likelihood function, since log is a monotonically increasing function. We will then proceed to maximize the log-likelihood, and the resulting estimates will be the same as if we had not taken the log.
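Before doing the calculus, it may help to see the same maximization performed numerically. The sketch below defines the log-likelihood directly with dnorm(..., log = TRUE), maximizes it with optim(), and compares the intercept and slope to lm(); parameterizing in terms of log(sigma) is our own convenience to keep the standard deviation positive, and x and y are the simulated data from earlier.

```r
# Log-likelihood of the SLR model; par = (beta_0, beta_1, log(sigma)).
# Working with log(sigma) keeps the standard deviation positive.
slr_log_lik <- function(par, x, y) {
  sum(dnorm(y, mean = par[1] + par[2] * x, sd = exp(par[3]), log = TRUE))
}

# optim() minimizes by default; fnscale = -1 turns it into a maximizer.
fit_mle <- optim(par = c(0, 0, 0), fn = slr_log_lik, x = x, y = y,
                 control = list(fnscale = -1))

c(beta_0_hat = fit_mle$par[1],
  beta_1_hat = fit_mle$par[2],
  sigma_hat  = exp(fit_mle$par[3]))
coef(lm(y ~ x))  # least squares returns (essentially) the same beta estimates
```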