STAT 3001 FINAL EXAM PREP Statistical Methods and Applications ANSWERED 2026/2027 (5 pages)

Document Text

1. What is the difference between descriptive and inferential statistics? Give an example of each. - Descriptive statistics summarize the characteristics of a data set, such as the mean, median, mode, standard deviation, and range. Inferential statistics use sample data to make generalizations or predictions about a population or a parameter, using tools such as confidence intervals, hypothesis tests, and regression. For example, descriptive statistics can tell us the average height of students in a class, while inferential statistics can tell us how likely it is that the average height of students in the whole school falls within a certain range (a sketch contrasting the two follows this list).

2. What are the assumptions of linear regression? How can you check them? - Linear regression assumes that the relationship between the dependent variable and the independent variables is linear, that the residuals are normally distributed, that the variance of the residuals is constant (homoscedasticity), that the residuals are not autocorrelated, and that there is no multicollinearity among the independent variables. These assumptions can be checked with scatterplots, histograms, Q-Q plots, residual plots, the variance inflation factor (VIF), and the Durbin-Watson test, among other tools (see the regression-diagnostics sketch below).

3. What is the difference between parametric and nonparametric tests? Give an example of each. - Parametric tests assume the data follow a particular distribution, such as the normal, binomial, or Poisson distribution; nonparametric tests make no distributional assumption. Parametric tests are usually more powerful and precise when their assumptions hold, but they require stricter conditions to be met; nonparametric tests are more robust and flexible, but may lose some information or efficiency. For example, the t-test and ANOVA are parametric tests that compare group means assuming normality and homogeneity of variance, while the Mann-Whitney U test and the Kruskal-Wallis test are nonparametric alternatives that compare groups using ranks (see the two-test sketch below).

4. What is the difference between Type I and Type II errors? How can you control them? - A Type I error is rejecting a true null hypothesis (a false positive); a Type II error is failing to reject a false null hypothesis (a false negative). The significance level (alpha) is the maximum allowable probability of a Type I error, and the power (1 - beta) is the probability of correctly rejecting a false null hypothesis, where beta is the probability of a Type II error. To control Type I error, we can lower the alpha level or apply multiple-testing corrections such as the Bonferroni or Holm methods. To control Type II error, i.e., to increase power, we can increase the sample size, reduce measurement variability, or target a larger effect size; note that lowering alpha reduces Type I error but increases the risk of a Type II error (see the power-analysis sketch below).
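
The contrast in question 1 can be made concrete with a few lines of code. This is a minimal sketch, assuming Python with NumPy and SciPy available; the height values are invented for illustration and are not from the course materials.

```python
# Descriptive vs. inferential statistics on a hypothetical sample of heights (cm).
import numpy as np
from scipy import stats

heights = np.array([165, 170, 172, 168, 175, 169, 171, 174, 166, 173])  # made-up sample

# Descriptive: summarize this sample itself.
print("mean:", heights.mean(),
      "median:", np.median(heights),
      "std:", heights.std(ddof=1))

# Inferential: use the sample to estimate the population mean with a 95% confidence interval.
ci = stats.t.interval(0.95, df=len(heights) - 1,
                      loc=heights.mean(), scale=stats.sem(heights))
print("95% CI for the population mean height:", ci)
```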
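For question 2, the assumption checks named above can be run with statsmodels. This is a minimal sketch on synthetic data; the variable names and the use of the Breusch-Pagan and Jarque-Bera tests are illustrative choices, not part of the original answer.

```python
# Fit an OLS model and run standard regression diagnostics.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson, jarque_bera

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                                   # two predictors
y = 1.5 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=100)  # linear signal plus noise

X_const = sm.add_constant(X)            # add the intercept column
model = sm.OLS(y, X_const).fit()
resid = model.resid

# Normality of residuals: Jarque-Bera test (a Q-Q plot is the graphical counterpart).
jb_stat, jb_p, _, _ = jarque_bera(resid)
print("Jarque-Bera p-value:", jb_p)

# Constant variance (homoscedasticity): Breusch-Pagan test.
_, bp_p, _, _ = het_breuschpagan(resid, X_const)
print("Breusch-Pagan p-value:", bp_p)

# Multicollinearity: variance inflation factor for each predictor (skip the constant).
print("VIFs:", [variance_inflation_factor(X_const, i) for i in range(1, X_const.shape[1])])

# Autocorrelation of residuals: Durbin-Watson statistic (values near 2 suggest none).
print("Durbin-Watson:", durbin_watson(resid))
```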
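For question 3, the parametric/nonparametric contrast is easiest to see by running both tests on the same two groups. This is a minimal sketch with SciPy on synthetic data; using Welch's t-test (equal_var=False) as the parametric option is a choice made for this example, not something stated in the original answer.

```python
# Compare a parametric test (t-test) with its nonparametric counterpart (Mann-Whitney U).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(loc=50, scale=10, size=30)
group_b = rng.normal(loc=55, scale=10, size=30)

# Parametric: assumes approximate normality; Welch's version drops the equal-variance assumption.
t_stat, t_p = stats.ttest_ind(group_a, group_b, equal_var=False)
print("Welch t-test p-value:", t_p)

# Nonparametric: rank-based, no distributional assumption.
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print("Mann-Whitney U p-value:", u_p)
```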
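For question 4, the most direct way to control the Type II error rate is to choose the sample size for a target power before collecting data. This is a minimal sketch using statsmodels' power module; the effect size, alpha, and power values are illustrative assumptions.

```python
# Sample size needed per group for a two-sample t-test at a given effect size,
# with alpha = 0.05 (Type I error cap) and power = 0.80 (so beta = 0.20).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # medium effect (Cohen's d)
                                   alpha=0.05,
                                   power=0.80,
                                   alternative="two-sided")
print("required n per group:", round(n_per_group))
```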