Brady T. West
What's Up?
I'm a Research Assistant Professor in the Michigan Program in Survey Methodology (MPSM), located within the Survey Research Center at the Institute for Social Research (ISR) on the University of Michigan-Ann Arbor campus. I also provide statistical consultation and help develop research grant proposals at the Center for Statistical Consultation and Research (CSCAR). I have a PhD in Survey Methodology from MPSM, as well as a Master's Degree in Applied Statistics and a Bachelor's Degree in Statistics from the U of M Department of Statistics. Interested parties can check out my CV here. You can also drop me an email!

Current Courses
Click here to download the data sets and syntax for the CSCAR workshop Issues in Analysis of Complex Sample Survey Data.

Linear Mixed Models: A Practical Guide Using Statistical Software
I have written a book entitled Linear Mixed Models: A Practical Guide Using Statistical Software with two colleagues here at U of M (Kathy Welch and Andrzej Galecki). The book is now in its second edition, which first became available in July 2014. Click on the title to access electronic versions of the data files and syntax discussed in the book. The book was published by Chapman & Hall/CRC Press in Boca Raton, Florida. You can order copies from the publisher or from online retailers (e.g., Amazon).

Applied Survey Data Analysis (ASDA)
I have also written a book entitled Applied Survey Data Analysis, which is joint work with my colleagues Steve Heeringa and Pat Berglund at ISR. This book aims to provide researchers with guidance on the correct application of modern techniques for design-based analysis of complex sample survey data, and it is now available from various online retailers.

Improving Surveys with Paradata: Analytic Use of Process Information
I have authored or co-authored two chapters in this new edited volume, which focuses on the various uses of survey paradata, or survey process information (edited by Frauke Kreuter):
1. West, B.T. (2013). The Effects of Error in Paradata on Weighting Class Adjustments: A Simulation Study. Chapter 15 in Improving Surveys with Paradata: Analytic Use of Process Information. Wiley Publishing.
2. West, B.T. and Sinibaldi, J. (2013). The Quality of Paradata: A Literature Review. Chapter 14 in Improving Surveys with Paradata: Analytic Use of Process Information. Wiley Publishing.
This was an exciting project to work on, and the book provides survey researchers with an up-to-date reference on the value of paradata for survey research.

The SAGE Handbook of Multilevel Modeling
This new handbook represents a modern and comprehensive overview of current research and practice related to multilevel modeling by leading statisticians in the area, with a focus on practical applications and considerations. My colleague Andrzej Galecki and I contributed a chapter on software for multilevel modeling (Chapter 26). I would highly recommend this resource if you use multilevel models frequently in your work!

The SAGE Handbook of Regression Analysis and Causal Inference
This new handbook presents modern views on both the art and science of regression modeling, and provides an up-to-date reference on the newest approaches to causal inference. In Chapter 11, the three authors of ASDA present an overview of modern approaches to fitting regression models to data from complex sample surveys.

Academic Publications
Selected publications are listed below.
1. Krueger, B.S. and West, B.T. (In Press; Authors Alphabetical). Assessing the potential of paradata and other auxiliary information for nonresponse adjustments. Public Opinion Quarterly.
2. Raykov, T., West, B.T., and Traynor, A. (In Press). Evaluation of coefficient alpha for multiple component measuring instruments in complex sample designs. Structural Equation Modeling.
3. Raykov, T. and West, B.T. (In Press). On Enhancing Plausibility of the Missing at Random Assumption in Incomplete Data Analyses via Evaluation of Response-Auxiliary Variable Correlations. Structural Equation Modeling.
4. West, B.T., and Elliott, M.R. (In Press). Frequentist and Bayesian Approaches for Comparing Interviewer Variance Components in Two Groups of Survey Interviewers. Survey Methodology.
5. West, B.T. and Kreuter, F. (In Press). A practical technique for improving the accuracy of interviewer observations of respondent characteristics. Field Methods.
6. West, B.T., Kreuter, F., and Trappmann, M. (2014). Is the collection of interviewer observations worthwhile in an economic panel survey? New evidence from the German Labor Market and Social Security (PASS) study. Journal of Survey Statistics and Methodology, 2(2), 159-181.
7. West, B.T., Kreuter, F., and Jaenichen, U. (2013). Interviewer Effects in Face-to-face Surveys: A Function of Sampling, Measurement Error or Nonresponse? Journal of Official Statistics, 29(2), 277-297.
8. West, B.T. and Kreuter, F. (2013). Factors Affecting the Accuracy of Interviewer Observations: Evidence from the National Survey of Family Growth (NSFG). Public Opinion Quarterly, 77(2), 522-548.
9. West, B.T. and Groves, R.M. (2013). The PAIP Score: A Propensity-Adjusted Interviewer Performance Indicator. Public Opinion Quarterly, 77(1), 352-374.
10. West, B.T. and Little, R.J.A. (2013). Nonresponse Adjustment of Survey Estimates Based on Auxiliary Variables Subject to Error. Journal of the Royal Statistical Society, Series C (Applied Statistics), 62(2), 213-231.
11. West, B.T. (2013). An Examination of the Quality and Utility of Interviewer Observations in the National Survey of Family Growth. Journal of the Royal Statistical Society, Series A, 176(1), 211-225.
12. Wagner, J., West, B.T., Kirgis, N., Lepkowski, J.M., Axinn, W.G., and Kruger-Ndiaye, S. (2012). Use of Paradata in a Responsive Design Framework to Manage a Field Data Collection. Journal of Official Statistics, 28(4), 477-499.
13. West, B.T. and McCabe, S.E. (2012). Incorporating Complex Sample Design Effects When Only Final Survey Weights are Available. The Stata Journal, 12(4), 718-725.
14. Groves, R.M., Presser, S., Tourangeau, R., West, B.T., Couper, M.P., Singer, E., and Toppe, C. (2012). Support for the Survey Sponsor and Nonresponse Bias. Public Opinion Quarterly, 76(3), 512-524.
15. West, B.T. and Galecki, A.T. (2011). An Overview of Current Software Procedures for Fitting Linear Mixed Models. The American Statistician, 65(4), 274-282.
16. West, B.T. (2011). Paradata in Survey Research: Examples, Utility, Quality, and Future Directions. Survey Practice, August: www.surveypractice.org.
17. West, B.T. and Kreuter, F. (2011). Observational Strategies Associated with Increased Accuracy of Interviewer Observations: Evidence from the National Survey of Family Growth. In JSM Proceedings, Survey Research Methods Section. Alexandria, VA: American Statistical Association, pp. 5646-5658.
18. West, B.T. and Olson, K. (2010). How much of interviewer variance is really nonresponse error variance? Public Opinion Quarterly, 74(5), 1004-1026.
19. McCabe, S.E., Hughes, T.L., Bostwick, W.B., West, B.T., and Boyd, C.J. (2010). Discrimination and Substance Use Disorders among Lesbian, Gay and Bisexual Adults in the United States. American Journal of Public Health, 100, 1946-1952.
20. West, B.T. (2009). Analyzing Longitudinal Data with the Linear Mixed Models Procedure in SPSS. Evaluation and the Health Professions, 32(3), 207-228.
21. McCabe, S.E., Hughes, T.L., Bostwick, W.B., West, B.T., and Boyd, C.J. (2009). Sexual Orientation, Substance Use Behaviors, and Substance Use Disorders in the United States. Addiction, 104, 1333-1345.
22. West, B.T. and Lamsal, M. (2008). A New Application of Linear Modeling in the Prediction of College Football Bowl Outcomes and the Development of Team Ratings. Journal of Quantitative Analysis in Sports, 4(3), Article 3.
23. West, B.T., Berglund, P., and Heeringa, S.G. (2008). A Closer Examination of Subpopulation Analysis of Complex Sample Survey Data. The Stata Journal, 8(4), 520-531.
24. West, B.T. (2006). A Simple and Flexible Rating Method for Predicting Success in the NCAA Basketball Tournament. Journal of Quantitative Analysis in Sports, 2(3), Article 3.
For a complete list of my publications, please click here.

Teaching
I teach courses, workshops, and seminars for the MPSM, CSCAR, ISR, various other departments around campus, and statistics.com. These include:
-SurvMeth 612: Applied Sampling (MPSM)
-SurvMeth 613: Analysis of Complex Sample Survey Data (MPSM)
-SurvMeth 614: Analysis of Complex Sample Survey Data (ISR Summer Program)
-SurvMeth 618: Inference for Complex Surveys (MPSM)
-SurvMeth 672/673: Survey Practicum (MPSM)
-SurvMeth 720/721: Total Survey Error (MPSM)
-SurvMeth 746: Advanced Statistical Modeling (MPSM)
-Issues in Analysis of Complex Sample Survey Data (CSCAR)
-Introduction to Stata (CSCAR)
-Applications of HLM (CSCAR)
-Introduction to SPSS (CSCAR)
-Intermediate Topics in SPSS (CSCAR)
-Logistic Regression and Related Techniques (CSCAR)
-Statistical Analysis with R (CSCAR)
-Statistical Analysis with Missing Data (CSCAR)
-Mixed and Hierarchical Linear Models (statistics.com)
-Analysis of Survey Data from Complex Sample Designs (statistics.com)
-Biostatistics for Grant Development (Radiation Oncology @ Medical School)
-Introduction to SAS for Financial Engineers
-Nursing 598: Statistical Analysis with SPSS (U of M Flint)

For more information on CSCAR workshops, please click here.

March Madness...
I strongly believe that statistical modeling can be used to predict success in the NCAA Division I Men's Basketball Tournament. I'm interested in developing statistical regression models for this purpose, and in the paper above, I present a simple and flexible rating method for predicting tournament success, based on ordinal logistic regression and the use of expectations for prediction. I believe that the RPI is a numerically flawed rating system that receives an unfair amount of weight in selecting and seeding teams for the tournament, and I have shown over the last five years that my models are comparable to or better than the RPI in terms of predicting success in the tournament.
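To give a flavor of the general approach, here is a rough sketch in Python. This is not the exact model or data from the paper; the file names and predictor names below are hypothetical placeholders.

```python
# Rough sketch of the general idea: fit an ordinal logistic regression to the
# number of tournament games won by teams in past tournaments, then rate this
# year's teams by their expected number of wins under the fitted model.
# NOTE: file names and predictor names are hypothetical placeholders.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

past = pd.read_csv("past_tournaments.csv")      # one row per team-year
predictors = ["adj_off_eff", "adj_def_eff", "strength_of_schedule"]

# Treat tournament wins (0-6) as an ordered categorical outcome
# (all win totals are assumed to appear in the historical data).
wins = past["wins"].astype(pd.CategoricalDtype(categories=range(7), ordered=True))
model = OrderedModel(wins, past[predictors], distr="logit")
fit = model.fit(method="bfgs", disp=False)

# Predicted probabilities of winning 0, 1, ..., 6 games for the current field.
field = pd.read_csv("current_field.csv")
probs = np.asarray(fit.predict(field[predictors]))     # shape (n_teams, 7)

# Expected wins = sum over k of k * P(wins = k); use this as the rating.
field["expected_wins"] = probs @ np.arange(probs.shape[1])
print(field.sort_values("expected_wins", ascending=False)[["team", "expected_wins"]])
```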

If numerical ratings like the RPI are going to be considered in seeding the teams selected for the tournament, the selection committee should focus on the ratings that do the best job of actually predicting success in the tournament, or pre-tournament ratings that correlate strongly with actual success. The "best" ratings can then be used to identify the teams that are most likely to do well in the tournament (and thus most deserving of the chance to compete for the national championship). I collect data on the RPI ratings, Jeff Sagarin's computer ratings, and the predictors of success that I consider in my models, and I then calculate predicted success in the tournament (which can be translated into a rating) based on my models. You can view the updated 2014 predictions and results, in addition to results from previous years, here.

In the 2014 tournament, my predictions had a correlation with actual success (0.600) that was higher than the correlations for the pre-tournament RPI ratings (0.500), BPI ratings (0.491), and Sagarin ratings (0.520). This has been the case in six of the past seven years. Feedback and comments are more than welcome!
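For anyone curious about how these comparisons are made, each one is just a correlation between a pre-tournament rating and a measure of each team's actual tournament success. A minimal sketch of that calculation (with hypothetical file and column names, not my actual data) would look like this:

```python
# Sketch of the rating comparison: correlate each pre-tournament rating with
# actual tournament success. File and column names are hypothetical.
import pandas as pd

results = pd.read_csv("tournament_2014.csv")   # one row per tournament team
for rating in ["model_rating", "rpi", "bpi", "sagarin"]:
    r = results[rating].corr(results["actual_wins"])   # Pearson correlation
    print(f"{rating}: r = {r:.3f}")
```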

Bowl Madness...
I'm also interested in the possibility that statistical modeling can be used to predict the outcomes of college football bowl games. In this paper, published in the Journal of Quantitative Analysis in Sports, my colleague Madhur Lamsal and I consider a straightforward application of statistical modeling to the question of whether team-level variables were able to predict the actual bowl game outcomes in the 2007-2008 bowl season. We also consider applications of the predictions in the development of ratings for college football teams, based on a round-robin playoff scenario.
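As a rough illustration of the round-robin rating idea (not the exact specification from the paper; the file names, predictors, and outcome below are hypothetical placeholders), the calculation could look something like this:

```python
# Rough sketch: fit a linear model for game outcomes from differences in
# team-level season statistics, then rate teams by predicted wins in a
# hypothetical round-robin among all of them. Names below are placeholders.
from itertools import combinations

import numpy as np
import pandas as pd
import statsmodels.api as sm

games = pd.read_csv("bowl_games.csv")           # one row per game (team A vs. team B)
stats = ["scoring_offense", "scoring_defense", "turnover_margin"]
diff_cols = [f"{s}_diff" for s in stats]        # team A stat minus team B stat

# Response: point differential (team A minus team B); predictors: stat differences.
X = sm.add_constant(games[diff_cols])
ols = sm.OLS(games["point_diff"], X).fit()

# Round-robin rating: predict every pairwise matchup and count predicted wins.
teams = pd.read_csv("team_season_stats.csv").set_index("team")
wins = pd.Series(0, index=teams.index, dtype=int)
for a, b in combinations(teams.index, 2):
    x = np.r_[1.0, (teams.loc[a, stats] - teams.loc[b, stats]).to_numpy()]
    winner = a if float(ols.predict(x.reshape(1, -1))[0]) > 0 else b
    wins.loc[winner] += 1

print(wins.sort_values(ascending=False))        # teams ranked by predicted wins
```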

Results dating back to 2008 can be found below.

2008-2009 Bowls: Predictions and Results (58.8% accuracy)
2009-2010 Bowls: Predictions, Results and Ratings (55.9% accuracy)
2010-2011 Bowls: Predictions and Results (62.9% accuracy)
2011-2012 Bowls: Predictions and Results (62.9% accuracy)
2012-2013 Bowls: Predictions and Results (77.1% accuracy)

Articles referencing the method have appeared in the New York Times, the Ann Arbor News, and the Kansas City Star.

Constructive comments and feedback are more than welcome. Please keep in mind that I do all of this as a hobby, for fun. I do not get paid by anyone to produce these ratings, and I do not have the time to look at every possible predictor of success! I'm always open to advice about data resources where additional (and more informative) team-level statistics can be found. All of these models are certainly in their infancy, and some of the predictions may look odd (of course I don't truly believe that Missouri was the third-best football team in the nation in 2008... I was purely reporting predictions based on my very young and under-developed model). I simply ask that people read into the general methods that I've proposed before making personal attacks of any kind. Thanks!

Check out my music page!

Community Service!
The University of Michigan Circle K
The Detroit Partnership
K-Grams =)

The Presbyterian Church
I'm a member of the First Presbyterian Church of Ann Arbor, where I have been involved with the Worship Committee for the past five years. I have also been a Deacon for Chapel 27 here in Ann Arbor, and I served as co-Moderator of the Board of Deacons in 2011. Click here to find out more about the Presbyterian Church of the USA.

More stuff...
Click here to see a picture of me and my wife Laura! =)
billiards.com
U of M Billiards
Bananas: IM Campus Champs!
A link to an earlier version of this page, from when it was a birthday present from my friends James DeVaney and Matt Comstock.
Goin' To Work!
Good drops.

This page is constantly under construction, so visit again soon!

Last modified 11/4/14 by Brady T. West