Answers to Even-Numbered Exercises
Applied Multivariate Statistical Concepts, 1e Debbie Hahs-Vaughn
Chapter 1 Answers to Conceptual Problems
- d—structural equation modeling allows the examination of relationships or prediction
with multiple dependent variables
- a—logistic regression allows the examination of one categorical dependent variable
- c—multiple linear regression provides for examining a continuous outcome
- b—MANOVA is used when there are multiple outcomes with the goal of determining
mean differences
- c—multilevel linear modeling is used to analyze relationships of units nested within
groups (e.g., children within preschool)
- a—cluster analysis is a statistical technique for developing profiles of units (e.g., people)
- d—confirmatory factor analysis allows for grouping of constructs when there is strong
theoretical evidence to support the relationships between variables
- b—discriminant analysis is a technique to predict group membership
- d—propensity score matching allows units to be matched; the matched groups can
then be used for later inferential analyses
- c—MANOVA is a test of mean differences, applicable when there are two or more
dependent variables
Chapter 2 Answers to Conceptual Problems
- c—see definition
- a—cannot make Type II error there
- c—d is an effect size index, a measure of practical significance
- c—of these options, the best to visually examine the relationship between two variables is
a scatterplot
- c—equal sample sizes are not required
- b—pre- to post-test mean differences are best examined using a dependent t test (see the sketch after this list)
- c—the intercept is 37000, which represents average salary when cumulative GPA is zero
- d—linear relationships are best represented by a straight line, although all of the points
need not fall on the line
- d—null hypothesis does not consider SS values
- false—with two groups the results of both procedures will always be the same
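
As an illustration of the dependent t test and the d effect size index mentioned above, here is a minimal sketch in Python; the pre/post scores are hypothetical and serve only to show the calculation, not a result from the text.

```python
# Minimal sketch (hypothetical data): dependent (paired) t test and Cohen's d
import numpy as np
from scipy import stats

pre = np.array([10, 12, 9, 14, 11, 13, 8, 12])    # hypothetical pretest scores
post = np.array([12, 13, 11, 15, 12, 15, 10, 13])  # hypothetical posttest scores

# Dependent t test on the pre-post pairs
t, p = stats.ttest_rel(post, pre)

# Cohen's d for paired scores: mean difference divided by the SD of the differences,
# a measure of practical significance
diff = post - pre
d = diff.mean() / diff.std(ddof=1)

print(f"t = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```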
Chapter 3 Answers to Conceptual Problems
- false—the assumption of independence is the assumption that is most closely aligned
with the sampling design
- c—independence can be examined with a scatterplot of residuals against predicted values; a random
display of points indicates evidence that the assumption has been met (see the sketch after this list)
- d—a kurtosis statistic that is within an absolute value of 7 is one form of evidence of
normality
- b—a nonstatistically significant Box’s M is evidence that homogeneity of variance-covariance has been met
- c—the same plots used to screen for independence can be used to screen for homogeneity
of variance: a scatterplot of residuals against predicted values, where a random display of points indicates evidence that the assumption has been met
- c—a relatively normal distribution will be evident when skewness is within an absolute
value of 2.0 and kurtosis within an absolute value of 7.0
- c—homoscedasticity applies to multiple linear regression and assumes that the variation in
scores for one continuous variable is approximately equal to the variation in scores for
another continuous variable
- a—when all pairs of dependent variables are bivariate normally distributed, linearity
evidence is present
- d—VIF value of 30 suggests multicollinearity
- b—independence is generally met when there is simple random sampling
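
The screening steps mentioned above (residuals-versus-predicted plot, skewness and kurtosis cutoffs, and VIF) can be illustrated with a minimal Python sketch; the data and variable names below are hypothetical, and the code is only one possible way to carry out these checks with statsmodels and scipy.

```python
# Minimal sketch (hypothetical data): screening regression assumptions
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2 + 0.5 * x1 + 0.3 * x2 + rng.normal(size=n)  # hypothetical outcome

X = sm.add_constant(np.column_stack([x1, x2]))
model = sm.OLS(y, X).fit()
resid, fitted = model.resid, model.fittedvalues

# Residuals against predicted values: a random display of points is evidence
# for independence and for homogeneity of variance
plt.scatter(fitted, resid)
plt.axhline(0, linestyle="--")
plt.xlabel("Predicted values")
plt.ylabel("Residuals")
plt.show()

# Skewness within |2.0| and kurtosis within |7.0| as rough evidence of normality
# (scipy reports excess kurtosis, where a normal distribution is 0)
print("skewness:", stats.skew(resid))
print("kurtosis:", stats.kurtosis(resid))

# VIF for each predictor (skipping the constant column); a value near 30 would
# suggest multicollinearity
for i in range(1, X.shape[1]):
    print("VIF for predictor", i, ":", variance_inflation_factor(X, i))
```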
Chapter 4 Answers to Conceptual Problems
- b—partial correlations correlate two variables while holding constant a third
- a—as variable 3 has the largest correlation with variable 1 and the smallest with variable
2
- c—perfect prediction when the standard error = 0
- false—the intercept can be any value
- false—adding an additional predictor can result in the same R²
- false—best prediction is when there is a high correlation of the predictors with the
dependent variable and low correlations among the predictors
- no—R² is higher when the predictors are uncorrelated
- no—the partial correlation may be larger than, the same as, or smaller than .6
- c—given there is theoretical support, the best method of selection is hierarchical
regression
- no—as discussed, these methods may yield different final models
- no—the purpose of the adjustment is to take the number of predictors into account; thus
adjusted R² may actually be smaller for the model with the most predictors (see the sketch after this list)
- true—that is precisely the situation when we should be the most concerned about
collinearity
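
A minimal Python sketch of the partial correlation and adjusted R² ideas discussed above follows; the data are hypothetical, and computing the partial correlation by correlating residuals is just one illustrative approach.

```python
# Minimal sketch (hypothetical data): partial correlation and adjusted R-squared
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100
x3 = rng.normal(size=n)              # variable held constant
x1 = 0.6 * x3 + rng.normal(size=n)
x2 = 0.4 * x3 + rng.normal(size=n)

# Partial correlation of x1 and x2 holding x3 constant: correlate the residuals
# of each variable after regressing it on x3
def resid_on(z, control):
    X = sm.add_constant(control)
    return sm.OLS(z, X).fit().resid

r_partial = np.corrcoef(resid_on(x1, x3), resid_on(x2, x3))[0, 1]
print("partial r(x1, x2 | x3):", r_partial)

# Adjusted R-squared penalizes for the number of predictors k, so it can be
# smaller for the model with more predictors:
#   adj R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1)
y = x1 + 0.5 * x2 + rng.normal(size=n)
X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.OLS(y, X).fit()
k = 2
adj_r2 = 1 - (1 - fit.rsquared) * (n - 1) / (n - k - 1)
print(adj_r2, fit.rsquared_adj)      # the two values should agree
```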
Answers to Computational Problems