MATLAB Answers

Compare betas from two different GLMs

Peter P
Peter P on 2 Dec 2019
Commented: Jeff Miller on 10 Dec 2019
I am trying to compare the betas from two different GLMs. For each model I have the following output (this is the output for only one model).
I want to compare the fixed-effects coefficients from Model 1 with those from Model 2, i.e. "Is Evidence_Control_11 (Model 1) significantly different from Evidence_Control_11 (Model 2)?" The models differ in the number of observations, but the formula is the same for both.
I would be grateful for guidance.



Answers (2)

Raunak Gupta
Raunak Gupta on 6 Dec 2019
The Model Fit Statistics section contains several measures that are suitable for comparing the overall fit of two models. For a specific variable such as the fixed effect 'Evidence_Control_11', the Estimate value is the beta you are looking for, but it says nothing about the goodness of the fit. So I suggest also looking at the pValue, which gives a measure of statistical reliability. anova can also be a good way to compare models for a single predictor.
Hope this helps.
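For reference, the per-coefficient statistics can be read directly from the fitted model object. A minimal sketch, assuming the models were fit with fitglme (whose Coefficients property has Name, Estimate, SE, and pValue columns); mdl1 is a placeholder for your fitted model:

```matlab
% Locate the row for one fixed effect and read off its statistics.
% 'mdl1' is a placeholder for a model returned by fitglme.
row  = strcmp(mdl1.Coefficients.Name, 'Evidence_Control_11');
beta = mdl1.Coefficients.Estimate(row);   % fixed-effect estimate (beta)
se   = mdl1.Coefficients.SE(row);         % its standard error
p    = mdl1.Coefficients.pValue(row);     % p-value for H0: beta = 0
```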

  1 Comment

Peter P
Peter P on 6 Dec 2019
Thanks, but I am not looking to compare model fit; rather, I want to know whether specific parameters, i.e. here evidence_control_11 vs. evidence_stress_11, are statistically different.
Statistically you would usually do a t-test, but I do not have the necessary information in this output to do so.
Do you have any suggestions for this?


Jeff Miller
Jeff Miller on 7 Dec 2019
You might compare confidence intervals. For example, the estimated value for evidence control is 0.62033. Compute an approximate 95% confidence interval (CI) by going 2 SEs above and below that estimate, i.e. 0.62033 +/- 2*0.021935. Do the same computation for evidence stress and compare the resulting CIs. If one interval is clearly above the other, you have your answer as to which estimate is larger. If not, check this paper: Wolfe R and Hanley J (2002), "If we're so different, why do we keep overlapping? When 1 plus 1 doesn't make 2", Canadian Medical Association Journal.
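In MATLAB, that calculation might look like the following sketch. The control estimate and SE are taken from the output in the question; the stress values are placeholders, since that model's output is not shown:

```matlab
% Approximate 95% CIs as estimate +/- 2*SE.
b_ctrl = 0.62033;  se_ctrl = 0.021935;  % from the Model 1 output
b_strs = 0.55;     se_strs = 0.025;     % placeholders for the Model 2 output
ci_ctrl = b_ctrl + [-2 2] * se_ctrl;
ci_strs = b_strs + [-2 2] * se_strs;
% The intervals overlap if each lower bound is below the other's upper bound.
overlap = ci_ctrl(1) <= ci_strs(2) && ci_strs(1) <= ci_ctrl(2);
```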


Peter P
Peter P on 10 Dec 2019
Sadly this also does not provide me with a p-value and just seems to be an extension of comparing CIs. If you have any advice on how to proceed, I would be very grateful.
Jeff Miller
Jeff Miller on 10 Dec 2019
Unfortunately, my advice is that you'll probably need to go through the problem in detail with a statistician. I am pretty sure that getting a p-value will depend on many details of the models that you are fitting and the data to which you are fitting them. For example, is one model nested in the other? Are the models being fit to the same data or different data? Even if nothing else applies, there is probably some bootstrapping approach that could be tailored to your situation, but that could be extremely tricky to set up correctly.
Sorry I don't have a better suggestion to offer.
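As a rough illustration of that bootstrap idea (a sketch only: the tables tbl1/tbl2, the formula, and plain row resampling with fitglm are all placeholder assumptions; with grouped or mixed-model data the resampling would need to respect the grouping structure):

```matlab
% Percentile bootstrap for the difference of one coefficient across two
% independently fitted models. All data/variable names are placeholders.
nBoot = 2000;
diffs = zeros(nBoot, 1);
for k = 1:nBoot
    s1 = tbl1(randi(height(tbl1), height(tbl1), 1), :);  % resample rows
    s2 = tbl2(randi(height(tbl2), height(tbl2), 1), :);
    m1 = fitglm(s1, formula);                             % refit each model
    m2 = fitglm(s2, formula);
    r1 = strcmp(m1.Coefficients.Properties.RowNames, 'Evidence_Control_11');
    r2 = strcmp(m2.Coefficients.Properties.RowNames, 'Evidence_Control_11');
    diffs(k) = m1.Coefficients.Estimate(r1) - m2.Coefficients.Estimate(r2);
end
ci = prctile(diffs, [2.5 97.5]);  % 95% bootstrap CI for the difference
% Two-sided bootstrap p-value: twice the smaller tail proportion around 0.
p  = 2 * min(mean(diffs <= 0), mean(diffs >= 0));
```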

