Statistical Modeling Week 2012
October 24 - 26, 2012, Boston
|Bridging the Gap Between Theory and Practice - Our 35th Year!|
|Statistical Modeling Week is an annual conference sponsored by Statistical Innovations Inc., featuring applications-oriented seminars focusing on the latest trends in statistical analysis. Led by a group of experts in the field, the seminars teach professionals how to apply and understand today's major breakthroughs in statistical methodology.|
Statistical Modeling Week 2012 will take place in Boston on October 24 - 26, 2012.
Whether you work in marketing, advertising, health, econometrics or evaluation, you will learn to apply these important breakthroughs to your everyday work. And you'll learn more than the techniques - you'll learn the theories behind them.
Numerous leading organizations have taken advantage of our prior seminars -
here is a sample list.
Attendance is limited. Register for Statistical Modeling Week 2012 today!
You can also download our 2012 conference brochure.
| Tuition and Enrollment Information|
October 24-25, 2012. Latent Class and Finite Mixture Modeling | $1,490
October 26, 2012. Applications of Latent Class Models with Discrete Choice, MaxDiff, and Rankings Data | $895
Note: Discounts are available for multiple course registrations. After enrolling in the first course at full tuition, deduct $50 for each additional course you sign up for (maximum discount: $100 if all 3 courses are taken).
Discounts are also available for additional registrants from your organization: their tuition is $795 for the 1-day seminar and $1,390 for the 2-day seminar.
Please note that the discounts will not appear on your order form when you register. We will include the discounts when we send you the invoice for the sessions.
| Key Driver Regression in Practice: Challenges and New Solutions |
Tony Babinec
NOTE: This course has been canceled
As an alternative, you may be interested in the following:
The American Statistical Association is sponsoring a 2-hour webinar, "Correlated Component Regression: A New Method for Analyzing Big Data or Small Data Containing Many Correlated Predictors," presented by Jay Magidson on Thursday, September 27. The registration deadline is September 25.
After introducing the statistical problems associated with near or complete multicollinearity and describing current solutions, this webinar introduces a new regression method, Correlated Component Regression (CCR), which provides reliable predictions of a dependent variable even with more predictors than cases. Three examples with real and simulated data are used to illustrate analysis based on the CORExpress® software. These data are made available to all attendees.
For more information and to register, please visit the webinar website.
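To make the multicollinearity problem concrete, here is a small pure-Python sketch. It is an illustration only, not the CCR algorithm or the CORExpress® implementation, and the data and penalty value are made up for the demonstration: with two nearly identical predictors, ordinary least squares produces huge offsetting coefficients, while a small ridge penalty (one standard remedy) stabilizes them.

```python
def solve2(a, b, c, d, e, f):
    # Solve the 2x2 system [[a, b], [c, d]] @ [u, v] = [e, f] by Cramer's rule.
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

def fit(x1, x2, y, lam=0.0):
    # Ridge normal equations (no intercept): (X'X + lam*I) beta = X'y.
    # lam = 0 gives ordinary least squares.
    s11 = sum(v * v for v in x1) + lam
    s22 = sum(v * v for v in x2) + lam
    s12 = sum(u * v for u, v in zip(x1, x2))
    t1 = sum(u * v for u, v in zip(x1, y))
    t2 = sum(u * v for u, v in zip(x2, y))
    return solve2(s11, s12, s12, s22, t1, t2)

# Hypothetical data: x2 is almost an exact copy of x1
x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [1.001, 1.998, 3.002, 3.999]
y  = [1.1, 1.9, 3.05, 3.95]

b_ols   = fit(x1, x2, y)            # unstable: large offsetting coefficients
b_ridge = fit(x1, x2, y, lam=0.1)   # stabilized: both coefficients near 0.5
```

Because y tracks both (nearly identical) predictors equally well, the penalized fit splits the effect roughly evenly between them instead of letting the coefficients explode in opposite directions.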
Who Should Attend
Marketing, biomedical and other researchers who want to improve their understanding of regression model development in the presence of many correlated predictors.
Familiarity with linear and logistic regression analysis at an applied level is assumed.
The identification of key drivers and their contributions to customer satisfaction, loyalty or another dependent variable is an important application of regression modeling. Many challenges arise in developing models and interpreting results, especially with many correlated predictors and multicollinearity. In this course, we review these challenges and show how advances in high-dimensional data analysis suggest new measures of variable importance and allow reliable models to be developed even when the number of predictors exceeds the number of cases! Our applications-oriented presentation provides insight into how the new approaches work through examples and an overview of the relevant theory, supplemented by the supporting equations. We use real and simulated data sets to illustrate the different approaches with XLSTAT®, SPSS®, CORExpress® and various R packages.
- Traditional regression modeling
- Challenges due to many correlated predictors
- Linear regression
- Logistic regression and ROC curves
- Discriminant analysis
- Failure of stepwise procedures
- Controls for overfitting
- Model selection criteria
- Information criteria: AIC, BIC
- Cross-validation statistics
- Correlated predictors and multicollinearity
- Variable reduction based on variable importance
- Case studies/applications of key driver regression
- Orange juice ratings data
- Job satisfaction data
- Importance of including suppressor variables as predictors
- Using cross-validation to assess prediction error
- Penalized regression
- PLS regression
- Correlated Component Regression (CCR)
- Simultaneous model estimation and variable reduction
- Segmentation: using hybrid models to capture heterogeneity
- Enhancing model performance with suppressor variables
- Scree plots and coefficient path plots
- Interpreting cross-validation statistics
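The cross-validation items in the outline above can be sketched concretely. The following minimal pure-Python example (an illustration only, not course material) computes a K-fold cross-validated prediction error for a simple one-predictor least-squares fit:

```python
def kfold_indices(n, k):
    # Split indices 0..n-1 into k contiguous folds of near-equal size.
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cv_error(x, y, k=5):
    # K-fold cross-validated squared error per case for the simple
    # linear fit y = a + b*x, refit on each training split.
    err = 0.0
    for test in kfold_indices(len(x), k):
        train = [i for i in range(len(x)) if i not in test]
        xb = sum(x[i] for i in train) / len(train)
        yb = sum(y[i] for i in train) / len(train)
        sxx = sum((x[i] - xb) ** 2 for i in train)
        b = sum((x[i] - xb) * (y[i] - yb) for i in train) / sxx
        a = yb - b * xb
        err += sum((y[i] - (a + b * x[i])) ** 2 for i in test)
    return err / len(x)

# Exactly linear toy data, so the cross-validated error is essentially zero
x = list(range(10))
y = [2.0 * v + 1.0 for v in x]
err = cv_error(x, y)
```

Comparing this held-out error across candidate models is what makes cross-validation a control for overfitting: a model that merely memorizes the training folds scores poorly on the folds it never saw.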
What you will learn
Back to Courses
- How to develop reliable models, even with extreme multicollinearity and when # predictors > # cases
- Why many popular variable selection techniques are suboptimal
- About the powerful step-down variable reduction technique in CORExpress®
- About free and commercially available software for interpreting high-dimensional data
| Latent Class and Finite Mixture Modeling |
Interest in and use of Latent Class (LC) models continue to grow because they provide better solutions than traditional approaches to cluster, factor and regression analysis when the population is not homogeneous. In this 2-day course we introduce LC as a probability model and describe various applications using the latest version of Latent GOLD®. On day 1, we focus on model fitting strategies and the interpretation of output. On day 2, we consider several advanced topics including identification issues, random effects with continuous factors, repeated observations, and hidden (latent) Markov models for latent growth.
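As a concrete companion to the course description, the sketch below fits a two-class latent class model for three binary items by EM in plain Python, using the local-independence assumption directly. It is a toy illustration with made-up data, not the Latent GOLD® implementation; the small smoothing constant in the M-step plays the role of the Bayes constants mentioned in the outline, keeping estimates off the 0/1 boundary.

```python
def lc_em(data, n_iter=200, bayes=1.0):
    """Two-class latent class model for binary items, fit by EM.

    Local independence: within a class the items are independent, so
    P(response pattern | class) is a product of item probabilities.
    """
    n_items = len(data[0])
    pi = [0.5, 0.5]                          # class sizes
    p = [[0.7] * n_items, [0.3] * n_items]   # P(item=1 | class); asymmetric start
    for _ in range(n_iter):
        # E-step: posterior class membership for each observation
        post = []
        for row in data:
            joint = []
            for k in range(2):
                lik = pi[k]
                for j, v in enumerate(row):
                    lik *= p[k][j] if v == 1 else 1.0 - p[k][j]
                joint.append(lik)
            tot = sum(joint)
            post.append([q / tot for q in joint])
        # M-step: re-estimate class sizes and item probabilities; the
        # "bayes" constant smooths estimates away from the 0/1 boundary
        for k in range(2):
            wk = sum(r[k] for r in post)
            pi[k] = wk / len(data)
            for j in range(n_items):
                ones = sum(r[k] * row[j] for r, row in zip(post, data))
                p[k][j] = (ones + bayes) / (wk + 2.0 * bayes)
    return pi, p

# Synthetic responses: a "high" group answering mostly yes and a "low"
# group answering mostly no on 3 binary items (hypothetical data)
data = ([(1, 1, 1)] * 40 + [(1, 1, 0)] * 5 + [(0, 1, 1)] * 5
        + [(0, 0, 0)] * 40 + [(0, 0, 1)] * 5 + [(1, 0, 0)] * 5)
pi, p = lc_em(data)
```

Run on these data, EM recovers two roughly equal-sized classes, one with high and one with low endorsement probabilities on every item, which is the sense in which LC analysis resolves an unobserved mixture that a single homogeneous model would miss.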
- Basic ideas of latent class analysis
- The concept of local independence
- The general probability model
- Handling nominal, ordinal, continuous and count variables
- LC measurement models
- Discrete vs. continuous factor analysis
- Comparing models and assessing fit
- Inclusion of covariates in LC models
- Identification problems and boundary solutions
- Use of Bayes constants to eliminate boundary solutions
- The problem of local solutions
- Bivariate residuals to diagnose local dependencies
- Case studies and computer demos
- LC regression models
- Relationship to random effects regression
- Model-based clustering / latent discriminant analysis
- Repeated measures / conjoint marketing
- LC Growth models
- Model specification using LG-Equations™
What you will learn
Back to Courses
- How to specify LC cluster, factor and regression / segmentation models
- Interpreting graphical displays of results
- What to look for when examining output
- Basic and advanced uses of the Latent GOLD program
- Strategies for assessing model fit with sparse and non-sparse data
- How to isolate the scale effects in ratings data
- How to develop models with mixed scale types
- Why LC models improve over K-means clustering
| Applications of Latent Class Models with Discrete Choice, MaxDiff, and Rankings Data |
Who Should Attend
This course is intended for researchers and advanced practitioners in marketing, economics, and other fields in which consumer demand and choice are of interest.
Participants should have a background in statistics and some familiarity with econometrics, but advanced training is not necessary.
Latent Class (LC) models are natural tools for analyzing discrete choice, MaxDiff and rankings data to identify segments with differing preferences. These models are widely used to forecast market share, design optimal products and services, and more. This course begins by introducing the theory and practical applications of these models in traditional choice, rating, ranking, MaxDiff and constant sum experiments in conjunction with the latest version of the Latent GOLD Choice program. We then show how to extend these models to improve interpretation by separating out potentially confounding scale factors and by incorporating additional survey data into the model to determine absolute as well as relative preferences. Examples include two case studies: one a trade-off study in which potential customers select ‘best’ and ‘worst’ from alternative configurations for a new product, and the other a more traditional choice design.
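To illustrate the kind of model the course covers, here is a minimal sketch, with hypothetical utilities and segment sizes, of how a two-segment latent class choice model turns class-specific logit probabilities into overall choice shares. It is far simpler than what Latent GOLD Choice estimates; the point is only the mixture structure.

```python
import math

def mnl_probs(utilities):
    # Multinomial logit choice probabilities for one choice set.
    m = max(utilities)                      # subtract max for numerical stability
    e = [math.exp(u - m) for u in utilities]
    s = sum(e)
    return [v / s for v in e]

def lc_choice_share(class_sizes, class_utilities):
    # Latent class mixture: the overall share of each alternative is the
    # size-weighted average of each class's logit probabilities.
    n_alt = len(class_utilities[0])
    shares = [0.0] * n_alt
    for w, utils in zip(class_sizes, class_utilities):
        probs = mnl_probs(utils)
        shares = [s + w * q for s, q in zip(shares, probs)]
    return shares

# Two hypothetical segments with opposite preferences over three profiles
sizes = [0.6, 0.4]
utils = [[2.0, 1.0, 0.0],    # segment 1 prefers alternative A
         [0.0, 1.0, 2.0]]    # segment 2 prefers alternative C
share = lc_choice_share(sizes, utils)
```

Because the two segments pull in opposite directions, an aggregate model would blur their preferences; the mixture keeps each segment's utilities intact and still yields market-level shares, which is the basis for the share forecasting and product design applications mentioned above.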
- Development of Excel-based simulators
- Stated preference vs. revealed preference data
- Experimental designs for stated preference
- Independence of Irrelevant Alternatives (IIA)
- Accounting for segment differences: Latent Class vs. HB
What you will learn
Back to Courses
- Overview of the theory of choice modeling
- The LC Choice approach – comparison with HB
- The family of choice model specifications/types
- Alternative approaches to experimental designs
- How to specify, estimate, and interpret the results from choice models
- Advances in Choice Modeling