Handling Quasi-Nonconvergence in Logistic Regression: Technical Details and an Applied Example
by Jeffrey M. Miller and M. David Miller.
Nonconvergence is a concern for any iterative data-analysis procedure. There are instances, however, in which convergence is obtained for the overall solution but not for a specific estimate. In most software packages, this problem is difficult to notice unless the researcher has a priori knowledge of reasonable solutions. Hence, faulty inferences can be disguised by a presumably correct estimation procedure, a phenomenon termed "quasi-nonconvergence". In logistic regression models, this type of nonconvergence occurs when the data are completely or quasi-completely separated, that is, when the predictors discriminate the outcome perfectly or nearly perfectly. Firth (1993) presented a penalized-likelihood correction that was later extended by Heinze and Ploner (2003) to solve the quasi-nonconvergence problem. This procedure is applied to educational research data to demonstrate its success in eliminating the problem.
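The correction the abstract describes replaces the ordinary maximum-likelihood score with Firth's modified score, U*(b) = X'(y - p + h(1/2 - p)), where h is the diagonal of the logistic hat matrix; the resulting estimates remain finite even under complete or quasi-complete separation. A minimal NumPy sketch of this idea (the function name, the toy data, and the plain Newton iteration without step-halving are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def firth_logistic(X, y, max_iter=100, tol=1e-8):
    """Firth-penalized logistic regression (Jeffreys-prior correction).

    Newton iterations on the modified score
        U*(b) = X'(y - p + h*(1/2 - p)),
    where h is the diagonal of the hat matrix
        H = W^{1/2} X (X'WX)^{-1} X' W^{1/2},  W = diag(p(1-p)).
    Unlike ordinary ML, the estimates stay finite under separation.
    """
    n, k = X.shape
    beta = np.zeros(k)
    for _ in range(max_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))   # fitted probabilities
        W = p * (1.0 - p)                        # logistic weights
        XtWX_inv = np.linalg.inv(X.T @ (W[:, None] * X))
        # Diagonal of the hat matrix, computed row-wise without forming H
        h = np.einsum('ij,jk,ik->i', X, XtWX_inv, X) * W
        score = X.T @ (y - p + h * (0.5 - p))    # Firth-modified score
        step = XtWX_inv @ score                  # Newton update
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Quasi-completely separated toy data: x > 0 predicts y = 1 perfectly,
# with a tie at x = 0. Ordinary ML would drive the slope to infinity.
X = np.column_stack([np.ones(8), [-3.0, -2.0, -1.0, 0.0, 0.0, 1.0, 2.0, 3.0]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
beta_hat = firth_logistic(X, y)
print(beta_hat)  # finite intercept and slope despite separation
```

On separated data like this, an unpenalized fit diverges (the slope grows without bound and standard software reports inflated coefficients and standard errors), whereas the penalized iteration above converges to finite values, which is the behavior the article exploits.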
Keywords: Quasi-Nonconvergence, Nonconvergence, Logistic Regression
Jeffrey M. Miller, email@example.com
M. David Miller, firstname.lastname@example.org
NOTE: The content of this article is the intellectual property of the authors, who retain all rights to future publication.