What's Up? I'm a Research Associate Professor in the Michigan Program in Survey Methodology (MPSM), located within the Survey Research Center at the Institute for Social Research on the University of Michigan-Ann Arbor campus. I also provide statistical consultation and help to develop research grant proposals as part of the Consulting for Statistics, Computing, and Analytics Research (CSCAR) team. I have a PhD in Survey Methodology from MPSM, and both a Master's degree in Applied Statistics and a Bachelor's degree in Statistics from the U of M Department of Statistics. Interested parties can check out my CV here or my NIH Bibliography here. You can also drop me an email!
Click here to download the data sets and syntax for the CSCAR workshop Intermediate Topics in SPSS: Advanced Statistical Models.
Linear Mixed Models: A Practical Guide using Statistical Software
I have written a book entitled Linear Mixed Models: A Practical Guide Using Statistical Software with two colleagues here at U of M (Kathy Welch and Andrzej Galecki). The book is now in its second edition, first available in July 2014. Click on the title to access electronic versions of the data files and syntax discussed in the book. The book was published by Chapman & Hall/CRC Press in Boca Raton, Florida. You can order copies from the publisher or online retailers (e.g., Amazon).
Applied Survey Data Analysis (ASDA)
I have also co-authored a book with my colleagues Steve Heeringa and Pat Berglund at ISR, entitled Applied Survey Data Analysis. This book aims to provide researchers with guidance on the correct application of modern techniques for design-based analysis of complex sample survey data, and it is now available from various online retailers.
Improving Surveys with Paradata: Analytic Uses of Process Information
I have authored or co-authored a couple of chapters in this new edited volume focusing on the various uses of survey paradata, or survey process information (edited by Frauke Kreuter):
1. West, B.T. (2013). The Effects of Error in Paradata on Weighting Class Adjustments: A Simulation Study. Chapter 15 in Improving Surveys with Paradata: Analytic Uses of Process Information. Wiley Publishing.
2. West, B.T. and Sinibaldi, J. (2013). The Quality of Paradata: A Literature Review. Chapter 14 in Improving Surveys with Paradata: Analytic Uses of Process Information. Wiley Publishing.
This was an exciting project to work on, and the volume provides survey researchers with an up-to-date reference on the value of paradata for survey research.
The SAGE Handbook of Multilevel Modeling
This new handbook represents a modern and comprehensive overview of current research and practice related to multilevel modeling by leading statisticians in the area, with a focus on practical applications and considerations. My colleague Andrzej Galecki and I contributed a chapter on software for multilevel modeling (Chapter 26). I would highly recommend this resource if you use multilevel models frequently in your work!
The SAGE Handbook of Regression Analysis and Causal Inference
This new handbook presents modern views on both the art and science of regression modeling, and provides an up-to-date reference on the newest approaches to causal inference. In Chapter 11, the three authors of ASDA present an overview of modern approaches to fitting regression models to data from complex sample surveys.
Selected publications are listed below.
1. West, B.T., Beer, L., Gremel, W., Weiser, J., Johnson, C., Garg, S., and Skarbinski, J. (2015). Weighted Multilevel Models: A Case Study. American Journal of Public Health, 105(11), 2214-2215.
2. West, B.T., Ghimire, D., and Axinn, W.G. (2015). Evaluating a Modular Design Approach to Collecting Survey Data using Text Messages. Survey Research Methods, 9(2), 111-123.
3. West, B.T., Wagner, J., Gu, H. and Hubbard, F. (2015). The Utility of Alternative Commercial Data Sources for Survey Operations and Estimation: Evidence from the National Survey of Family Growth. Journal of Survey Statistics and Methodology, 3(2), 240-264.
4. Elliott, M.R. and West, B.T. (2015, Authors Alphabetical). “Clustering by Interviewer”: A Source of Variance That Is Unaccounted for in Single-Stage Health Surveys. American Journal of Epidemiology, 182(2), 118-126.
5. West, B.T. and Kreuter, F. (2015). A Practical Technique for Improving the Accuracy of Interviewer Observations of Respondent Characteristics. Field Methods, 27(2), 144-162.
6. West, B.T., Welch, K.B. and Galecki, A.T. (with Contributions from Brenda W. Gillespie) (2014). Linear Mixed Models: A Practical Guide using Statistical Software, Second Edition. Chapman & Hall/CRC Press: Boca Raton, FL.
7. Krueger, B.S. and West, B.T. (2014, Authors Alphabetical). Assessing the Potential of Paradata and Other Auxiliary Information for Nonresponse Adjustments. Public Opinion Quarterly, 78(4), 795-831.
8. Raykov, T., West, B.T., and Traynor, A. (2014). Evaluation of Coefficient Alpha for Multiple Component Measuring Instruments in Complex Sample Designs. Structural Equation Modeling. DOI: 10.1080/10705511.2014.936081.
9. Sakshaug, J. and West, B.T. (2014). Important Considerations when Analyzing Health Survey Data Collected using a Complex Sample Design. American Journal of Public Health, 104(1), 15-16.
10. West, B.T. and Peytcheva, E. (2014). Can Interviewer Behaviors During ACASI Affect Data Quality? Survey Practice, 5(7).
11. West, B.T., and Elliott, M.R. (2014). Frequentist and Bayesian Approaches for Comparing Interviewer Variance Components in Two Groups of Survey Interviewers. Survey Methodology, 40(2), 163-188.
12. West, B.T. and Little, R.J.A. (2013). Nonresponse Adjustment of Survey Estimates based on Auxiliary Variables Subject to Error. Journal of the Royal Statistical Society – Series C (Applied Statistics), 62(2), 213-231.
13. West, B.T. and Groves, R.M. (2013). The PAIP Score: A Propensity-Adjusted Interviewer Performance Indicator. Public Opinion Quarterly, 77(1), 352-374.
14. West, B.T. and Kreuter, F. (2013). Factors Affecting the Accuracy of Interviewer Observations: Evidence from the National Survey of Family Growth (NSFG). Public Opinion Quarterly, 77(2), 522-548.
15. West, B.T., Kreuter, F., and Jaenichen, U. (2013). “Interviewer” Effects in Face-to-face Surveys: A Function of Sampling, Measurement Error or Nonresponse? Journal of Official Statistics, 29(2), 277-297.
16. West, B.T. (2013). An Examination of the Quality and Utility of Interviewer Observations in the National Survey of Family Growth. Journal of the Royal Statistical Society, Series A (General), 176(1), 211-225.
17. West, B.T. and Sinibaldi, J. (2013). The Quality of Paradata: A Literature Review. Chapter 14 in Improving Surveys with Paradata: Analytic Uses of Process Information. Wiley Publishing.
18. West, B.T. (2013). The Effects of Error in Paradata on Weighting Class Adjustments: A Simulation Study. Chapter 15 in Improving Surveys with Paradata: Analytic Uses of Process Information. Wiley Publishing.
19. Wagner, J., West, B.T., Kirgis, N., Lepkowski, J.M., Axinn, W.G., and Kruger-Ndiaye, S. (2012). Use of Paradata in a Responsive Design Framework to Manage a Field Data Collection. Journal of Official Statistics, 28(4), 477-499.
20. West, B.T. and McCabe, S.E. (2012). Incorporating Complex Sample Design Effects When Only Final Survey Weights are Available. The Stata Journal, 12(4), 718-725.
21. West, B.T. and Galecki, A.T. (2011). An Overview of Current Software Procedures for Fitting Linear Mixed Models. The American Statistician, 65(4), 274-282.
22. Heeringa, S.G., West, B.T., and Berglund, P.A. (2010). Applied Survey Data Analysis. Chapman & Hall/CRC Press: Boca Raton, FL.
23. West, B.T. and Olson, K. (2010). How Much of Interviewer Variance is Really Nonresponse Error Variance? Public Opinion Quarterly, 74(5), 1004-1026.
24. McCabe, S.E., Hughes, T.L., Bostwick, W.B., West, B.T., and Boyd, C.J. (2010). Discrimination and Substance Use Disorders among Lesbian, Gay and Bisexual Adults in the United States. American Journal of Public Health, 100, 1946-1952.
25. West, B.T. and Lamsal, M. (2008). A New Application of Linear Modeling in the Prediction of College Football Bowl Outcomes and the Development of Team Ratings. Journal of Quantitative Analysis in Sports, 4(3), Article 3.
26. West, B.T. (2006). A Simple and Flexible Rating Method for Predicting Success in the NCAA Basketball Tournament. Journal of Quantitative Analysis in Sports, 2(3), Article 3.
For a complete list of my publications, please click here or here.
I teach courses, workshops, and seminars for the MPSM, CSCAR, ISR, various other departments around campus, and statistics.com. These include:
-SurvMeth 612: Applied Sampling (MPSM)
-SurvMeth 613: Analysis of Complex Sample Survey Data (MPSM)
-SurvMeth 614: Analysis of Complex Sample Survey Data (ISR Summer Program)
-SurvMeth 618: Inference for Complex Surveys (MPSM)
-SurvMeth 672/673: Survey Practicum (MPSM)
-SurvMeth 720/721: Total Survey Error (MPSM)
-SurvMeth 746: Advanced Statistical Modeling (MPSM)
-Issues in Analysis of Complex Sample Survey Data (CSCAR)
-Introduction to Stata (CSCAR)
-Applications of HLM (CSCAR)
-Introduction to SPSS (CSCAR)
-Intermediate Topics in SPSS (CSCAR)
-Logistic Regression and Related Techniques (CSCAR)
-Statistical Analysis with R (CSCAR)
-Statistical Analysis with Missing Data (CSCAR)
-Mixed and Hierarchical Linear Models (statistics.com)
-Analysis of Survey Data from Complex Sample Designs (statistics.com)
-Biostatistics for Grant Development (Radiation Oncology @ Medical School)
-Introduction to SAS for Financial Engineers
-Nursing 598: Statistical Analysis with SPSS (U of M Flint)
For more information on CSCAR workshops, please visit here.
I strongly believe that statistical modeling can be used to predict success in the NCAA Division I Men's Basketball Tournament. In the paper above, I present a simple and flexible rating method based on ordinal logistic regression, using the expected number of tournament wins for prediction. I believe that the RPI is a numerically flawed rating system that receives an unfair amount of weight in selecting and seeding teams for the tournament, and over the last five years I have shown that my models are comparable to or better than the RPI in terms of predicting success in the tournament.
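As a rough illustration of the idea (not my actual fitted model), a proportional-odds ordinal logistic model treats the number of tournament wins (0 through 6) as an ordered outcome, and a team's rating is its expected number of wins under the model. The cutpoints, slope, and covariate values below are entirely hypothetical:

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def expected_wins(x, cutpoints, beta):
    """Expected tournament wins under a proportional-odds (ordinal logistic) model.

    P(wins <= k) = logistic(cutpoints[k] - beta * x) for k = 0..5, and the
    final category (6 wins, the champion) absorbs the remaining probability.
    `x` is a single team-strength covariate; all parameter values are hypothetical.
    """
    # cumulative probabilities for 0..5 wins; P(wins <= 6) = 1 by definition
    cum = [logistic(c - beta * x) for c in cutpoints] + [1.0]
    ev, prev = 0.0, 0.0
    for k, c in enumerate(cum):
        ev += k * (c - prev)  # P(wins = k) = P(wins <= k) - P(wins <= k-1)
        prev = c
    return ev

# hypothetical increasing cutpoints and slope; a stronger team (larger x)
# earns a higher expected-wins rating
cuts = [-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]
weak_rating = expected_wins(-1.0, cuts, 1.5)
strong_rating = expected_wins(1.0, cuts, 1.5)
```

The expected-wins scale is convenient because it can be compared directly with a team's actual number of tournament wins, which is how the correlations reported below are computed.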
If numerical ratings like the RPI are going to be considered in seeding teams selected for the tournament, the selection committee should focus on the ratings that do the best job of actually predicting success in the tournament, or pre-tournament ratings that correlate very well with actual success. The "best" ratings can be used to identify teams that are likely to do well in the tournament (and thus teams that are most eligible to compete for the national championship). I collect data on RPI ratings, BPI ratings, Jeff Sagarin's computer ratings, and the predictors of success that I consider in my models, and then calculate predicted success in the tournament (which can be translated into a rating) based on my models. You can view the final 2016 predictions and results, in addition to results from previous years, here.
In the 2016 tournament, my predictions had a higher correlation with actual success (0.603) than the pre-tournament RPI ratings (0.514), BPI ratings (0.502), and Sagarin ratings (0.521). This has been the case for eight of the past nine years (at least in terms of the RPI and the Sagarin ratings). Feedback and comments are more than welcome!
I'm also interested in the possibility that statistical modeling can be used to predict the outcomes of college football bowl games. In this paper, published in the Journal of Quantitative Analysis in Sports, my colleague Madhur Lamsal and I consider a straightforward application of statistical modeling, asking whether team-level variables were able to predict the actual bowl game outcomes in the 2007-2008 bowl season. We also consider applications of the predictions in the development of ratings for college football teams, based on a round-robin playoff scenario.
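The round-robin idea can be sketched in a few lines: predict a point margin for every pairing from team-level statistics, then rate each team by its average predicted margin against all other teams. The single covariate and fitted slope here are stand-ins for the paper's actual predictors, chosen purely for illustration:

```python
def predict_margin(stat_a, stat_b, beta=0.8):
    """Predicted point margin for team A over team B from one team-level
    statistic; beta is a hypothetical fitted regression slope."""
    return beta * (stat_a - stat_b)

def round_robin_ratings(stats, beta=0.8):
    """Rate each team by its average predicted margin over every other
    team, mimicking a round-robin playoff scenario."""
    ratings = {}
    for a, sa in stats.items():
        margins = [predict_margin(sa, sb, beta)
                   for b, sb in stats.items() if b != a]
        ratings[a] = sum(margins) / len(margins)
    return ratings

# hypothetical per-team statistic (e.g., points scored per game)
stats = {"TeamA": 32.0, "TeamB": 27.5, "TeamC": 24.0}
ratings = round_robin_ratings(stats)
```

One nice property of this construction is that the ratings are centered: the average predicted margins sum to zero across teams, so a rating is directly interpretable as points better (or worse) than a typical opponent.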
Results dating back to 2008 can be found below.
2008-2009 Bowls: Predictions and Results (58.8% accuracy)
2009-2010 Bowls: Predictions, Results and Ratings (55.9% accuracy)
2010-2011 Bowls: Predictions and Results (62.9% accuracy)
2011-2012 Bowls: Predictions and Results (62.9% accuracy)
2012-2013 Bowls: Predictions and Results (77.1% accuracy)
Articles referencing the method have appeared in the New York Times, the Ann Arbor News, and the Kansas City Star.
Constructive comments and feedback are more than welcome. Please keep in mind that I do all of this as a hobby, for fun. I do not get paid by anyone to produce these ratings, and I do not have the time to look at every possible predictor of success! I'm always open to advice about data resources where additional (and more informative) team-level statistics can be found. All of these models are certainly in their infancy, and some of the predictions may definitely look odd (of course I don't truly believe that Missouri was the third-best football team in the nation in 2008...I was purely reporting predictions based on my very young and under-developed model). I simply ask that people read into the general methods that I've proposed before making personal attacks of any kind. Thanks!
Check out my music page!
The University of Michigan Circle K
The Detroit Partnership
The Presbyterian Church
I'm a member of the First Presbyterian Church of Ann Arbor, where I have been involved with the Worship Committee for the past five years. I have also been a Deacon for Chapel 27 here in Ann Arbor, and I was co-Moderator of the Board of Deacons in 2011. Click here to find out more about the Presbyterian Church of the USA.
Click here to see a picture of me and my wife Laura! =)
U of M Billiards
Bananas: IM Campus Champs!
A link to an earlier version of this page, which was a birthday present from my friends James DeVaney and Matt Comstock.
Goin' To Work!
This page is constantly under construction, so visit again soon!
Last modified 5/24/16 by Brady T. West