PS2010
Description
PS2010 mind map
Tags
statistics
spss
psychological research methods
research methods
analysis
psychology
Mind Map by
Jada Khan
, updated more than 1 year ago
Created by
Jada Khan
over 7 years ago
Resource summary
PS2010
SPSS
stats tests
Chi-squared
analyse nominal/frequency/categorical data
nominal data: describes the group a P belongs to
NON-PARAMETRIC
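The chi-squared branch maps onto a contingency-table test. As an illustration only (the counts and groups below are made up, and the map itself works in SPSS), a minimal scipy sketch:

```python
from scipy import stats

# Hypothetical frequency table: rows = group a P belongs to, columns = category chosen
observed = [[30, 10],
            [18, 22]]

chi2, p, df, expected = stats.chi2_contingency(observed)
print(f"X2({df}) = {chi2:.2f}, p = {p:.3f}")  # report as X2(df) = value, p = value
```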
Differences between conditions: COMPARISONS
ANOVA, t test
t tests
ERRORS
Type 1: THE WORST!!!!!
Finding an effect in the sample (accepting the experimental H), but NO effect in real life
Incorrectly finding signif. effect
Cannot avoid unless testing the ENTIRE population! 5% error allowed = ALPHA LEVEL (p < .050)
Type 2
Found no effect in sample, accept null H, but there ARE effects in real world
PARAMETRIC ANALYSIS
Homogeneity of variance
LEVENE'S TEST
Not signif.: (p > .050)
ASSUMPTION HAS BEEN MET
"equal variances assumed" row
p GREATER THAN .050
Signif.: p < .050
ASSUMPTION HAS BEEN VIOLATED
"Equal variances not assumed" row
p LESS THAN .050
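A minimal sketch of Levene's test outside SPSS, using scipy on hypothetical scores (note scipy centres on the median by default, so the statistic can differ slightly from SPSS's mean-centred version):

```python
from scipy import stats

group_a = [12, 15, 14, 10, 13, 16, 11]   # hypothetical scores, group A
group_b = [22, 9, 30, 14, 5, 27, 18]     # hypothetical scores, group B

f_levene, p_levene = stats.levene(group_a, group_b)
if p_levene > .05:
    print("NS: assumption met -> read the 'equal variances assumed' row")
else:
    print("Signif.: assumption violated -> read the 'equal variances not assumed' row")
```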
Independent t test
SPSS output
"Independent samples test"
LEVENE'S
t (df) = t statistic, p value
descriptives
Total sample (N), individual group (n)
include M and SD
Confidence intervals (CI)
CI of a mean: uses N and SD -> lower and upper score
Can be 95% confident the interval between the lower and upper scores captures the population mean - across replications of the exp, 95% of such intervals would do so
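As a sketch of how the CI of a mean is built from N and SD (hypothetical scores; SPSS reports the equivalent interval in its t-test output):

```python
import numpy as np
from scipy import stats

scores = np.array([12, 15, 14, 10, 13, 16, 11, 14])   # hypothetical sample

mean = scores.mean()
sem = stats.sem(scores)                                # standard error: SD / sqrt(N)
lower, upper = stats.t.interval(0.95, len(scores) - 1, loc=mean, scale=sem)  # df = N - 1
print(f"M = {mean:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")
```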
WRITE UP
comparing groups
"Group stats"
Null hypothesis - A = B, no direction stated
After analysis - either accept (if p is NS) or reject null hypothesis (p is Signif.)
If SPSS output shows p = .000 -> report as p < .001
Null hypothesis - A = B, non directional; easiest H to make if there is no previous research
After data analysis - either accept or reject Null H
Accept: there is no relationship
Reject: there is a relationship
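Pulling the independent t test branch together, a minimal scipy sketch on made-up data (equal_var mirrors the Levene's decision above; SPSS's "Independent samples test" table reports both rows at once):

```python
from scipy import stats

group_a = [12, 15, 14, 10, 13, 16, 11]   # hypothetical data, condition A
group_b = [18, 21, 17, 19, 22, 20, 16]   # hypothetical data, condition B

# equal_var=True  ~ "equal variances assumed" row
# equal_var=False ~ "equal variances not assumed" row (Welch's t)
t, p = stats.ttest_ind(group_a, group_b, equal_var=True)
df = len(group_a) + len(group_b) - 2
print(f"t({df}) = {t:.2f}, p = {p:.3f}")   # write up as t(df) = value, p = value
```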
Repeated t test
SPSS output
"Paired samples test"
t (df) = t statistic, p value
no homogeneity of variance or Levene's test
violates assumptions of independence of obs.; random variance (WITHIN groups) is reduced
SPHERICITY ASSUMPTION
W = Mauchly's W, X2 (df) = chi-squared value, p = p value
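A minimal scipy sketch of the repeated (paired) t test on hypothetical before/after scores from the same Ps:

```python
from scipy import stats

before = [10, 12, 9, 14, 11, 13]   # hypothetical scores, condition 1 (same Ps)
after  = [13, 15, 11, 17, 12, 16]  # hypothetical scores, condition 2

t, p = stats.ttest_rel(before, after)
print(f"t({len(before) - 1}) = {t:.2f}, p = {p:.3f}")   # SPSS: "Paired samples test"
```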
One sample t test
compares data from a single group to a "reference value"
eg. population
SPSS output
"one sample t test" -> test value (reference)
interpreting direction: is sample M signif higher/lower than test value?
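A minimal scipy sketch of the one sample t test against a reference ("test") value; the scores and reference below are made up:

```python
from scipy import stats

sample = [102, 98, 110, 105, 99, 108, 101]   # hypothetical scores
reference_value = 100                         # "test value", e.g. a known population mean

t, p = stats.ttest_1samp(sample, popmean=reference_value)
print(f"t({len(sample) - 1}) = {t:.2f}, p = {p:.3f}")
# Direction: is the sample mean higher or lower than the reference value?
```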
ANOVA: Independent measures
Analysis of Variance; compares many groups
3 MAIN ASSUMPTIONS
normal distribution of DV
homogeneity of variances
LEVENE'S
independence of observations
SPSS OUTPUT
"Between subjects factors" + "descriptives"
"Levene's test of equality of error variances"
F (df1, df2) = F ratio, p = p value
if Levene's is NS (p > .050)
GOOD - Assumption has been met
Bonferroni
(few pairwise comparisons)
adjusts alpha level: .050 / no. of conditions
Tukey
(many pairwise comparisons)
Levene's = signif (p < .050)
BAD - Assumption has been violated
Games-Howell
"Tests of bet. subjects effects"
F (model df, error df) = F ratio, p = p value
eg. if IV = time, DV = no. words recalled -> main effect of time is reflected through this write up
write up
explain analysis used
has assumption of homogeneity been met?
report and interpret direction of main effect
"Source" error = random variance
"mean square" time: explained V; error: random V
big F ratio = likely signif !!!
Preferred to multiple t tests because...
eg. w/ two t tests: roughly 5% error x 2 ≈ 10% error (exactly 1 - .95^2 = 9.75%)
increased chance of making a Type 1 error w/ multiple t tests
Familywise error: making one or more type 1 errors
F ratio: ratio of explained (experimental) to random variance
Exp / random
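A minimal scipy sketch of an independent one-way ANOVA on made-up recall scores, plus the familywise-error arithmetic that motivates it over multiple t tests:

```python
from scipy import stats

g1 = [8, 9, 7, 10, 9]      # hypothetical recall scores, condition 1
g2 = [12, 11, 13, 12, 14]  # condition 2
g3 = [15, 14, 16, 13, 15]  # condition 3

f, p = stats.f_oneway(g1, g2, g3)
df_error = len(g1) + len(g2) + len(g3) - 3
print(f"F(2, {df_error}) = {f:.2f}, p = {p:.3f}")

# Familywise error across separate t tests: 1 - (1 - alpha)^k
k = 3
print("familywise error over 3 t tests:", round(1 - (1 - .05) ** k, 3))
# Bonferroni keeps the overall rate near .05 by testing each comparison at alpha / k
print("Bonferroni-adjusted alpha:", .05 / k)
```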
ANOVA: Repeated measures
directional H - planned contrasts
non-directional H - post hocs are limited for rep. anova
use bonferroni
SPSS
(don't need to look at "multivariate tests") !
"within sub. factors" - coding variables for contrast interpretation
descriptives
Mauchly's test of sphericity
if NS: assumption of sphericity has not been violated
"sphericity assumed" rows - main effect stats
if S: sphericity has been violated
GREENHOUSE-GEISSER (alters df)
W = Mauchly's W, X2 (df) = Chi squared value, p = sig.
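For the repeated measures ANOVA itself, a minimal sketch with statsmodels' AnovaRM on hypothetical long-format data; unlike SPSS it does not print Mauchly's test or Greenhouse-Geisser corrections, so sphericity has to be checked separately:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: each P measured at three time points
data = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "time":    ["t1", "t2", "t3"] * 4,
    "recall":  [8, 10, 12, 7, 9, 13, 9, 11, 14, 8, 10, 12],
})

res = AnovaRM(data, depvar="recall", subject="subject", within=["time"]).fit()
print(res)   # F(df1, df2) and p for the main effect of time
```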
Repeated measures ANCOVA
EMM's rather than descriptives
Mauchly's
Confound: F (covariate df, error df) = F ratio, p = sig.
IV * confound interaction row can be ignored in SPSS output (usually NS)
"tests of within subjects contrasts" - ignore interaction and error rows
G-G if Mauchly's is S
Two-way repeated ANOVA
Two-way Mixed ANOVA
1 (or more) IV with independent p's + 1 (or more) IV with repeated p's
eg. examining whether the importance of looks (IV1) differs for males and females (IV2). DV: likelihood of going on a 2nd date
SPHERICITY: repeated IV
LEVENE'S: independent IV
interpreting main effect + interaction - MEANS and plots
break down interaction: split file for either looks / gender; repeated ANOVA
Three-way mixed ANOVA
3 IV's - independent and repeated measures
if there are only 2 levels of each IV - ASSUME SPHERICITY
EMM'S for interpretation
IV1 main effect
IV2 main effect
IV3 main effect
Interaction effects
break down w/ split file
effects of indep. and repeated V's are presented in diff parts of output !
2 IV's: same p's in different conditions of each IV
eg. examining whether looks (IV1) and personality (IV2) have an effect on attractiveness (DV)
SPHERICITY
W = Mauchly's W, X2 (df) = Chi squared value, p = sig.
ANCOVA
takes into account a covariate (variance caused by confound)
when analysing a covariate - it could explain some of the random variance
SPSS
EMM's
adjusted using covariate
interpreting output requires EMM's rather than unadjusted descriptives
"tests of between subjects effects"
will display 2 results above "error": first will be the covariate, second will be the IV
was homogeneity of variance met? was the covariate S? was the main effect S?
covariate must be continuous (not categorical)
Stage 1
does the covariate explain a signif amount of variability in the DV
Stage 2
after controlling for the covariate, is there more exp V than random V?
more random V than exp V: ANOVA NS
covariate explains small amount of random V: covariate NS
thus, ANCOVA will be NS
more random V than exp V: ANOVA NS
covariate explains a lot of the random V: covariate S
more exp V than random V: ANCOVA is S
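The two ANCOVA stages can be illustrated with a statsmodels formula model on made-up data (the variable names dv, group and covariate are placeholders):

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({                       # hypothetical data
    "dv":        [10, 12, 11, 14, 18, 17, 19, 16],
    "group":     ["a", "a", "a", "a", "b", "b", "b", "b"],
    "covariate": [3.1, 4.0, 3.5, 4.2, 5.0, 4.8, 5.5, 4.9],
})

model = smf.ols("dv ~ covariate + C(group)", data=df).fit()
print(anova_lm(model, typ=2))
# Stage 1: is the covariate row significant (does it explain variability in the DV)?
# Stage 2: controlling for it, is the C(group) row (the IV) significant?
```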
Factorial independent ANOVA
two-way: 2 variables / factors
different p's in each cond.
eg. sex and alcohol consumption (IVs) on aggressiveness (DV)
examines main effects (effect of each factor on its own) AND interactions between the factors
break down MAIN EFFECTS if 3+ cond.'s
line graph / plots suggest a significant interaction if:
lines are not parallel (going in diff directions) - but S depends on angle!
lines are crossing
2 conditions only need to report MEANS "estimates"
breaking down interaction effect
separate independent t tests comparing levels of IV2 at each level of IV1
"independent samples t test"
factorial ANOVA vs one-way ANOVA
ADVANTAGES
analysing interaction effects
adding variables reduces error term - accounting for random variance
SPSS output
LEVENE'S
"tests of between subjects effects"
"source" - IV1, IV2, IV1 * IV2
"multiple comparisons" - post hoc output
Relationships between variables
Correlation, regression
Complex correlations
Partial correlations
PEARSON'S r
r values range from perfect negative (-1) to perfect positive (+1)
line of best fit - represents DIRECTION of relationship
residuals: diff betw. raw data point and line of best fit
smaller residuals - more accurate model - line of best fit reduces random variability
OUTPUT
"CORRELATIONS": R VALUE, P VALUE, N
r = .660 POSITIVE relationship
r = -.103 NEGATIVE relationship
correlation does not imply causation
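A minimal scipy sketch of Pearson's r on made-up revision/exam data, reported as r, p, N:

```python
from scipy import stats

hours_revised = [2, 4, 6, 8, 10, 12]     # hypothetical data
exam_score    = [40, 48, 55, 60, 68, 75]

r, p = stats.pearsonr(hours_revised, exam_score)
print(f"r = {r:.3f}, p = {p:.3f}, N = {len(exam_score)}")
# r near +1 = strong positive relationship; near -1 = strong negative; near 0 = none
```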
Multiple regression
Complex regression models
4 assumptions
multicollinearity
distribution of residuals
homoscedasticity
outlier effects
Categorical variables in regression
beyond simple correlations: analysing 2+ CONTINUOUS variables
PREDICTIVE MODELS
outcome variable ("DV")
predictor variables ("IVs")
OUTPUT
PEARSON'S r
R2 and adjusted R2
explained variance in outcome V by predictor V
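A minimal multiple regression sketch with statsmodels on invented predictors, showing where R2 and adjusted R2 come from:

```python
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({                                       # hypothetical data
    "exam_score":    [40, 48, 55, 60, 68, 75, 52, 63],    # outcome ("DV")
    "hours_revised": [2, 4, 6, 8, 10, 12, 5, 9],          # predictors ("IVs")
    "sleep_hours":   [6, 7, 6, 8, 7, 8, 5, 7],
})

X = sm.add_constant(df[["hours_revised", "sleep_hours"]])
model = sm.OLS(df["exam_score"], X).fit()
print(model.rsquared, model.rsquared_adj)   # R2 and adjusted R2
print(model.summary())                      # coefficients, t and p values
```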
Factor and reliability analysis
Advanced stats
effect size
power analysis
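As a sketch of what a power analysis involves (statsmodels, with conventional values assumed: Cohen's d = .5, alpha = .05, power = .80):

```python
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(round(n_per_group))   # participants needed per group (~64 under these assumptions)
```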
BASICS
Data view: enter data
Variable view: define variable properties
Decide if variable is a continuous score or categorical definition
Continuous data (always changing/ not fixed; IQ, age) : just enter raw data
Measure: SCALE
Categorical data (fixed; sex): needs to be coded
Measure: NOMINAL
Before analysis: list, name all variables + demographics
In Variable view, each row represents one variable
For categorical variables, define VALUES and MEASURE
In Data view: Row (across) - one participant's data
Column (down) - one variable
Planned contrasts
based off directional H
one-tailed
DEVIATION
compares effect of each cond. (except 1st) with overall effect
Contrast 1: 2 vs 1,2,3,4 etc
Contrast 2: 3 vs 1,2,3,4
Contrast 3: 4 vs 1,2,3,4
SIMPLE
compares effect of each cond. to 1st (reference)
Contrast 1: 1 vs 2
Contrast 2: 1 vs 3
Contrast 3: 1 vs 4
DIFFERENCE
compares effect of each cond. to overall effect of previous cond's
Contrast 1: 2 vs 1
Contrast 2: 3 vs 2,1
Contrast 3: 4 vs 3,2,1
opposite to HELMERT
HELMERT
compares effect of each cond. to all following cond's
Contrast 1: 1 vs 2,3,4
Contrast 2: 2 vs 3,4
Contrast 3: 3 vs 4
opposite to DIFFERENCE
REPEATED
compares effect of each cond. to the next cond. only (not all following cond's)
Contrast 1: 1 vs 2
Contrast 2: 2 vs 3
Contrast 3: 3 vs 4
no. of conditions - 1 = no. of contrasts (eg. 6 conditions -> 5 contrasts)
POLYNOMIAL
looks at patterns in data
trend analysis only appropriate for continuous IV's !
Linear trend: straight line
Quadratic trend: 1 change in direction
Cubic trend: 2 changes in direction
Post-Hoc
based off non-directional H
two-tailed
Descriptive stats
Measures of central tendency
Mode, median, mean
Dispersion
Range, variance, SD
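A minimal sketch of the descriptive statistics on made-up scores, using Python's statistics module:

```python
import statistics as st

scores = [3, 5, 5, 6, 7, 8, 10]   # hypothetical data

print(st.mode(scores), st.median(scores), st.mean(scores))               # central tendency
print(max(scores) - min(scores), st.variance(scores), st.stdev(scores))  # range, variance, SD
```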
Experimental variance: variability between conditions
experimental manipulation
GOOD: likely significant
large F ratio - significant
explained by more exp variance compared to random variance
Random variance: variability within conditions
measurement/ human error
Indiv. diffs.
unaccounted for / unmeasured variables
BAD: not likely to be significant
Inferential stats: tell us if data is signif. or not
PARAMETRIC
4 assumptions
Interval / ratio data
interval: negative values possible
ratio: negative values not possible (true zero point)
normal data distribution
Independence of observations
responses from P's (observations) should not be influenced by each other
Homogeneity of variance
"SAME" pattern of variance in all groups
Positively skewed: tail in + direction (right); LOW scores over-represented
Negatively skewed: tail in - direction (left); HIGH scores over-represented
Ideal because we need to be confident of the mean differences
NON-PARAMETRIC STATS
No normal distribution needed!
No homogeneity of variance needed
RANKING DATA
SPSS analysis is based around RANKS rather than actual data
report MEDIANS! ("statistics")
Independent
2 conditions
Mann-Whitney / Wilcoxon Rank-Sum
Independent t test
U = Mann-Whitney U, z = Z, p = Asymp. sig.
3 + conditions
Kruskal-Wallis
Independent ANOVA
H (df) = Chi-squared value, p = Asymp. sig.
Repeated
2 conditions
Wilcoxon-Signed-Rank
Repeated t test
T = (smallest value under "Ranks" table - "mean rank" column), p = asymp. sig.
3+ conditions
Friedman's
Repeated ANOVA
X2 (df) = chi-squared value, p= asymp. sig.
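The four ranking-based tests line up with scipy functions; a minimal sketch on hypothetical condition scores (for the repeated tests, a, b and c are treated as the same Ps measured again):

```python
from scipy import stats

a = [3, 5, 4, 6, 2, 5]       # hypothetical scores, condition A
b = [7, 8, 6, 9, 7, 8]       # condition B
c = [10, 9, 11, 12, 10, 9]   # condition C

print(stats.mannwhitneyu(a, b))          # independent, 2 conditions: U, p
print(stats.kruskal(a, b, c))            # independent, 3+ conditions: H, p
print(stats.wilcoxon(a, b))              # repeated, 2 conditions: T, p
print(stats.friedmanchisquare(a, b, c))  # repeated, 3+ conditions: X2, p
```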
Testing assumptions
interval/ ratio data
normal distribution
histogram
skewness
normality tests
SPSS output
"tests of normality" - under this will be the name of test to report
D (df) = statistic, p = sig.
if S (p < .050) = BAD (significantly different from a normal dist.)
NS = GOOD
Kolmogorov-Smirnov: D(df) = statistic, p = sig.
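A minimal scipy sketch of a normality check on simulated scores; SPSS's Kolmogorov-Smirnov uses a Lilliefors correction, so its p value will differ slightly from the plain KS test below:

```python
import numpy as np
from scipy import stats

scores = np.random.default_rng(1).normal(loc=50, scale=10, size=40)   # simulated data

w, p = stats.shapiro(scores)              # Shapiro-Wilk (also reported by SPSS)
print(f"W = {w:.3f}, p = {p:.3f}")        # p > .050 = GOOD (not signif. non-normal)

d, p_ks = stats.kstest(scores, "norm", args=(scores.mean(), scores.std(ddof=1)))
print(f"D = {d:.3f}, p = {p_ks:.3f}")     # KS against a normal with the sample M and SD
```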
homogeneity of variance
LEVENE'S (F ratio)
Questionnaire design
reliability - consistency
internal consistency of each item (esp. if in subscale): consistent scores
validity - is the measure measuring what it claims to?
be specific (eg. rather than "regularly", use "weekly", "daily", etc.)
no double negatives / double barrelled Q's (eg. no two issues in one Q)
give option of not responding to any sensitive Q's
Consistent response options! (eg. strongly agree, agree...)
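The map does not name a statistic for internal consistency; the usual choice is Cronbach's alpha, sketched below with numpy on invented Likert responses:

```python
import numpy as np

def cronbach_alpha(items):
    """items: rows = respondents, columns = items from the same (sub)scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()   # sum of individual item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of the total scale score
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

responses = [[4, 5, 4, 4], [3, 3, 2, 3], [5, 5, 4, 5],   # hypothetical 1-5 Likert data
             [2, 2, 3, 2], [4, 4, 4, 5], [3, 2, 3, 3]]
print(round(cronbach_alpha(responses), 2))   # ~.70+ is commonly treated as acceptable
```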
Open Q's
advantages
unrestricted response
detail and additional info- qualitative data
disadvantages
difficult to analyse + summarise responses
time consuming for respondent -> participant effects ?
Thematic analysis
identify main themes that emerge from data
validity: do the themes reflect what the p's said/ meant
inter-rater reliability in analysing data / extracting themes
reflexivity: awareness that researcher can never be unbiased
Interpretative phenomenological analysis (IPA)
most appropriate for finding answers about the experiences of certain groups
sampling methods
purposive sampling: homogenous p's
idiographic approach: individual cases
double hermeneutic: p makes sense of experience; researcher understands + interprets
phenomenology: the phenomena that we see in the world around us
what the researcher brings to the text is important for analysis; engages with p's account of phen. rather than phen. itself
analysing p's account of depression rather than depression as a condition
GOAL: generate a list of master themes (incl. p's shared experience and the essence of the phen.)
most appropriate for finding themes in population
Content analysis
derives semantic themes from TA and looks at their occurrence
Closed Q's
advantages
quick and easy to analyse / code
disadvantages
fixed choice of responses
inter-rater reliability and bias in coding
closed responses
categorical (male / female)
likert scale
ranking items in order
possibility of acquiescence bias (always agreeing w/ items)
Interviews and focus groups
identify key themes and terms
Unstructured
no set questions
Semi-structured
Q's set as a GUIDE
Focus group - group interview; interaction between p's is a source of data
similar structure to semi-structured interviews
less artificial than one on one interview
less appropriate for sensitive topics
Similar
Psychology A1
Ellie Hughes
Statistics Key Words
Culan O'Meara
History of Psychology
mia.rigby
Biological Psychology - Stress
Gurdev Manchanda
Bowlby's Theory of Attachment
Jessica Phillips
Psychology subject map
Jake Pickup
Memory Key words
Sammy :P
Psychology | Unit 4 | Addiction - Explanations
showmestarlight
The Biological Approach to Psychology
Gabby Wood
SAMPLING
Elliot O'Leary
Chapter 5: Short-term and Working Memory
krupa8711