Summary of the resource
What analysis test should be used?
- Qualitative (categorical)
- One categorical variable
Notes:
- one categorical variable from a single population.
- Chi-square Goodness-of-fit (x^2)
Notes:
- comparing the frequencies you observe in each category of one variable to the frequencies you might expect to get in those categories by chance.
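A minimal sketch of a goodness-of-fit test in Python, assuming scipy is available; the counts and the equal-preference expectation are invented for illustration.

```python
from scipy.stats import chisquare

# Hypothetical: 60 people each pick one of three fizzy drinks.
observed = [30, 18, 12]   # observed counts per category (made up)
expected = [20, 20, 20]   # counts expected by chance (equal preference)

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p:.3f}")  # small p: observed counts differ from expectation
```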
- Two categorical variables
Notes:
- For example, you ask adults which fizzy drink they prefer, Pepsi or Sprite, and compare the answers given by each gender.
- Pearson's Chi-square test (x^2)
Notes:
- comparing the frequencies you observe in each cell to the frequencies you might expect to get in those cells by chance, using a contingency table.
- Equation:
x^2 = SUM OF[ (observed - model)^2 / model ]
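A minimal sketch of Pearson's chi-square test on a 2x2 contingency table (gender by drink preference, as in the example above), assuming scipy; the counts are invented.

```python
from scipy.stats import chi2_contingency

# Rows: gender (men, women); columns: preferred drink (Pepsi, Sprite). Counts are made up.
table = [[25, 15],
         [10, 30]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
print("expected counts under independence:\n", expected)
```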
- Quantitative (measurement)
- Relationships
Notes:
- trying to fit a linear model to the data to describe the relationship, i.e.
y = bx + c; this model would explain the current data and predict future patterns.
- One predictor
(measurement)
- Continuous
- Degree of relationship
(Pearson correlation) r
Notes:
- essentially the gradient of the 'line of best fit' once both variables are standardised, so if r = 1 or -1 there is a perfect positive/negative correlation. As r gets closer to 0, the relationship becomes weaker.
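A minimal sketch of computing Pearson's r with scipy; the x and y values are invented.

```python
from scipy.stats import pearsonr

x = [1, 2, 3, 4, 5, 6]          # e.g. hours of revision (made up)
y = [52, 55, 61, 64, 70, 71]    # e.g. exam score (made up)

r, p = pearsonr(x, y)
print(f"r = {r:.2f}, p = {p:.3f}")  # r near +1/-1 = strong relationship, near 0 = weak
```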
- Form of relationship
(Regression)
Notes:
- simple regression analysis
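A minimal sketch of a simple regression (fitting y = bx + c) with scipy's linregress; the data are invented.

```python
from scipy.stats import linregress

x = [1, 2, 3, 4, 5, 6]
y = [52, 55, 61, 64, 70, 71]

fit = linregress(x, y)
print(f"y = {fit.slope:.2f}x + {fit.intercept:.2f}, R^2 = {fit.rvalue**2:.2f}")

# The fitted line describes the current data and can predict new outcomes:
predicted = fit.slope * 7 + fit.intercept
```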
- Ranks
- Spearman's rs
Notes:
- A bivariate correlation coefficient that works on ranked data. *Ranking the data reduces the impact of outliers.
*First rank the data, then apply Pearson's equation (r) to the ranks...
- EXAMPLE: you want to assess how creativity relates to the position awarded in a storytelling competition, so the positions are recorded alongside the results of a creativity questionnaire. Although the ranks use numbers, they are technically ordinal: the numerical values themselves carry no meaning, but the ORDER does matter.
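A minimal sketch of Spearman's rs for the storytelling example, assuming scipy; the competition positions and creativity scores are invented.

```python
from scipy.stats import spearmanr

position = [1, 2, 3, 4, 5, 6, 7, 8]             # rank awarded in the competition (made up)
creativity = [60, 58, 47, 51, 40, 42, 35, 30]   # creativity questionnaire score (made up)

rs, p = spearmanr(position, creativity)  # ranks both variables, then correlates the ranks
print(f"rs = {rs:.2f}, p = {p:.3f}")
```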
- Multiple predictors
(multiple regression)
- Multiple Regression analysis
Notes:
- An outcome is predicted by a linear combination of two or more predictor variables. Outcome = Y, predictors = X. Each predictor has a regression coefficient 'b' associated with it...
Y = (b0 + b1X1 + b2X2 + ... + bnXn) + E
- Predictor variables must be chosen based on a sound theoretical rationale - "do NOT just select all random predictors, bung them all into a regression analysis and hope for the best".
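A minimal sketch of multiple regression, Y = b0 + b1X1 + b2X2 + E, fitted by ordinary least squares with numpy; the outcome and the two predictors are invented.

```python
import numpy as np

# Hypothetical outcome and two theoretically motivated predictors (made-up values).
y  = np.array([10.0, 12.0, 15.0, 19.0, 22.0, 28.0])
x1 = np.array([ 1.0,  2.0,  3.0,  4.0,  5.0,  6.0])
x2 = np.array([ 5.0,  4.0,  6.0,  7.0,  6.0,  9.0])

X = np.column_stack([np.ones_like(x1), x1, x2])   # the column of 1s gives the intercept b0
b, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2 = b
print(f"Y = {b0:.2f} + {b1:.2f}*X1 + {b2:.2f}*X2")
```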
- Differences
- Two groups
- Independent measures/
between-subjects design
Notes:
- When the data are normally distributed, use the independent t-test; when they are not, use the Mann-Whitney test (a sketch of both follows after the Mann-Whitney note below).
*Intuition: if you put the scores in order of the DV and label each one with its group, a real difference between the groups shows up as the labels tending to separate, e.g. AAAAAABBABBB.
- Independent sample t-test
Notes:
- establishes whether two means collected from independent samples differ significantly.
- Mann Whitney
Notes:
- also tests the difference between 2 samples but is non-parametric.
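A minimal sketch of both two-group tests with scipy; the two independent samples are invented.

```python
from scipy.stats import ttest_ind, mannwhitneyu

group_a = [23, 25, 28, 30, 27, 26]   # made-up scores, group A
group_b = [31, 29, 33, 35, 30, 34]   # made-up scores, group B

t, p_t = ttest_ind(group_a, group_b)                               # parametric: assumes normality
u, p_u = mannwhitneyu(group_a, group_b, alternative="two-sided")   # non-parametric alternative
print(f"t = {t:.2f} (p = {p_t:.3f}); U = {u:.1f} (p = {p_u:.3f})")
```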
- Repeated measures/
within-subjects design
Notes:
- Both look at differences between two conditions experienced by one group. If the data meet the assumption of normal distribution, use the related-samples t-test; if not, use the Wilcoxon signed-rank test (a sketch of both follows below).
- Related
sample t-test
- Wilcoxon
signed-rank test
Notes:
- *NOT to be confused with Wilcoxon's rank-sum test, which is equivalent to the Mann-Whitney test and plays the same role as the independent-samples t-test.
*Non-parametric version of the related-samples t-test, so it still compares the scores of one group across both conditions, but it does not assume a normal distribution.
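A minimal sketch of both two-condition repeated-measures tests with scipy; one group is measured in two conditions, and the scores are invented.

```python
from scipy.stats import ttest_rel, wilcoxon

condition_1 = [12, 15, 14, 17, 13, 16, 18]   # same participants, condition 1 (made up)
condition_2 = [14, 18, 15, 20, 15, 19, 21]   # same participants, condition 2 (made up)

t, p_t = ttest_rel(condition_1, condition_2)   # parametric: assumes normally distributed differences
w, p_w = wilcoxon(condition_1, condition_2)    # Wilcoxon signed-rank: non-parametric alternative
print(f"t = {t:.2f} (p = {p_t:.3f}); W = {w:.1f} (p = {p_w:.3f})")
```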
- Multiple groups
- Repeated measures/
within-subjects design
- Repeated
measures ANOVA
- Friedman's ANOVA
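For the multiple-group repeated-measures branch, scipy provides Friedman's test directly (a repeated-measures ANOVA usually needs a dedicated package such as statsmodels). A minimal sketch with invented data, one list per condition:

```python
from scipy.stats import friedmanchisquare

# Same participants measured under three conditions (made-up scores).
cond_1 = [10, 12, 13, 11, 14, 12]
cond_2 = [13, 14, 15, 13, 16, 15]
cond_3 = [12, 13, 13, 12, 15, 13]

stat, p = friedmanchisquare(cond_1, cond_2, cond_3)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")
```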
- Independent measures/
between-subjects design
- One independent variable
Notes:
- e.g. three groups of wine experts (each influenced differently, e.g. told the wine is luxury vs. bargain) taste three wines, and a mean score across the three wines is produced for each participant. So you are comparing three groups (one independent variable: how the experts were influenced) on the average score given to the wine (see the sketch after the Kruskal-Wallis entry below).
- One-way ANOVA
- Kruskal-Wallis
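A minimal sketch of both tests for the wine example with scipy; the three groups of average scores are invented, and the third group's label is a made-up addition for illustration.

```python
from scipy.stats import f_oneway, kruskal

# Mean wine score per expert in each influence group (made-up data).
luxury   = [7.5, 8.0, 7.8, 8.2, 7.9]
bargain  = [6.1, 6.5, 6.0, 6.8, 6.3]
no_label = [7.0, 6.9, 7.2, 7.1, 6.8]

f, p_f = f_oneway(luxury, bargain, no_label)   # parametric one-way ANOVA
h, p_h = kruskal(luxury, bargain, no_label)    # non-parametric Kruskal-Wallis
print(f"F = {f:.2f} (p = {p_f:.3f}); H = {h:.2f} (p = {p_h:.3f})")
```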
- Multiple
independent
variables
- Factorial ANOVA
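A minimal sketch of a factorial ANOVA (two categorical independent variables plus their interaction) using statsmodels' formula interface, assuming pandas and statsmodels are installed; the variable names and the data frame are invented.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical 2x2 design: two categorical IVs ("influence", "gender"), one outcome ("score").
df = pd.DataFrame({
    "influence": ["luxury", "luxury", "bargain", "bargain"] * 4,
    "gender":    ["m", "f"] * 8,
    "score":     [7.5, 7.8, 6.2, 6.9, 7.9, 8.1, 6.0, 6.5,
                  7.4, 7.7, 6.3, 6.8, 8.0, 7.6, 6.1, 6.6],
})

model = smf.ols("score ~ C(influence) * C(gender)", data=df).fit()  # main effects + interaction
print(sm.stats.anova_lm(model, typ=2))                              # ANOVA table
```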