Stats

Description

Mind Map about Stats, created by nb43 on 22/04/2013 (more than 11 years ago); last updated more than 1 year ago.

Resource Summary

Stats
  1. Mixed Factorial ANOVA
    1. Experiment designs: between and within subjects

      Note:

      • Between subjects: different participants in each condition; looks at differences between groups.
      • Within subjects: the same participants in each condition; looks at differences between treatments.
      • The dependent variable is measured in exactly the same way in both designs.
      1. Problems for between

        Note:

        • Participant variables (individual differences between the groups).
        • Large groups of participants are required, which can be impractical.
        • Biases can lead to false conclusions: assignment, observer-expectancy and subject-expectancy biases.
        • It is possible to assess the baseline measure.
        1. Problems for within

          Note:

          • Practice effects: lack of naivety; the more often participants do the task, the better they get.
          • Longer testing sessions when there are many conditions.
          1. Factorial Designs

            Note:

            • One dependent variable, two or more independent variables.
            • Used when we suspect that more than one IV is contributing to a DV.
            • Allows exploration of complicated relationships between IVs and a DV.
            1. Main effect: how each IV individually affects the DV - the overall trend
              1. Interactions: how IV factors combine to affect the DV
                1. Between Subjects factorial ANOVA
                  1. Within Subjects factorial ANOVA
                    1. Mixed factorial ANOVA

                      Note:

                      • Efficient use of participant numbers and of each participant's time - reduces the drawbacks of the other designs.
                      • One of the most common types of design.
                      • Mixed factorial ANOVA assumptions and formulae are the same as for factorial ANOVA.
                      1. mix of between and within factors

                        Note:

                        • at least one between subjects factor and one within subjects factor
                          1. Increasing the number of between-subjects factors rapidly raises the required participant numbers, making such studies costly or non-viable
                          1. Main effect and Interaction formula

                            Note:

                            • Report F values, MS values and SS values.
                            • Report as: F(between df, within df) = F value, p = p value.
                            • A minimal Python sketch of obtaining these values appears at the end of this branch.
                            1. Within subjects
                              1. Between Subjects
                                  1. F(between df, within df) = F value, p = p value
                                  1. F values, MS values, SS values
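A minimal sketch of running a mixed factorial ANOVA outside SPSS, assuming the Python package pingouin is installed; the file name and column names ('subject', 'group', 'time', 'score') are hypothetical placeholders, not part of the original notes:

```python
# Mixed factorial ANOVA sketch (assumes the pingouin package is installed;
# file and column names below are hypothetical and must match your data).
import pandas as pd
import pingouin as pg

df = pd.read_csv("mixed_design.csv")         # long-format data, one row per score

aov = pg.mixed_anova(data=df, dv="score",
                     between="group",        # between-subjects factor
                     within="time",          # within-subjects factor
                     subject="subject")
# Column names may differ slightly across pingouin versions.
print(aov[["Source", "SS", "MS", "F", "p-unc"]])
# Report each effect as F(between df, within df) = F value, p = p value.
```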
                                2. Assumptions

                                  Note:

                                  • Interval/ratio data.
                                  • Normal distribution - check with a histogram.
                                  • Homogeneity of variance (between subjects) - Levene's test.
                                  • Sphericity of covariance (within subjects) - Mauchly's test.
                                  • There are no nonparametric alternatives if these assumptions are violated.
                                  • A rough scipy sketch of the first checks appears at the end of this branch.
                                  1. 1. interval/ ratio data
                                    1. 2. normal distribution
                                      1. 3. Homogeneity of variance (between, Levene's)

                                        Note:

                                        • want it to be non-significant
                                        1. 4. Sphericity of covariance (within, Mauchly's)

                                          Note:

                                          • want it to be non-significant
                                           1. no nonparametric alternatives if violated
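As a rough illustration of checking the first assumptions outside SPSS - a sketch only, assuming scipy is available; the scores are made up, and Mauchly's test is normally read from the SPSS output:

```python
# Sketch: normality (Shapiro-Wilk) and homogeneity of variance (Levene's)
# checks with scipy. group_a / group_b are illustrative made-up scores
# for two between-subjects groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(10, 2, size=30)
group_b = rng.normal(12, 2, size=30)

print(stats.shapiro(group_a))            # normality per group: want p > .05
print(stats.shapiro(group_b))
print(stats.levene(group_a, group_b))    # homogeneity: want a non-significant result
```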
                                          2. TWO RULES
                                            1. use between subjects formulae for between subjects effects and within subjects for within subjects effects
                                              1. if there is a conflict e.g. in interactions, use within subjects
                                              2. N = total number of scores
                                                1. n = number of scores within the condition
                                          3. Correlation
                                            1. Tests of Association

                                              Note:

                                              • Tests of the relationship between two variables, usually performed on continuous variables.
                                              • Test whether there is shared variance between any given pair of variables.
                                              • We are looking for an association between the samples, not a difference (as in an independent samples t-test).
                                              1. Pearson's (parametric); Spearman's (nonparametric)

                                                Note:

                                                • Related tests: the point-biserial correlation (one continuous variable, one categorical variable with 2 levels), simple linear regression, and multiple linear regression.
                                                • A short scipy sketch contrasting Pearson's and Spearman's appears after the assumptions below.
                                                1. Pearson's Correlation Assumptions (parametric)
                                                  1. 1. linear relationship between variables

                                                    Note:

                                                    • A linear relationship means that, at any point, a given change in x leads to the same change in y (the relationship follows a straight line).
                                                    • If the scatterplot shows a clear nonlinear relationship, do not run a Pearson's correlation.
                                                    • Data with a curving, nonlinear relationship are not suitable for Pearson's correlation analysis.
                                                    1. 2. variables measure interval/ ratio data which are normally distributed

                                                      Note:

                                                      • The mean and s.d. only accurately describe the average and dispersal of the data when the data are normally distributed.
                                                      • If the frequency distributions show a non-normal distribution, do not run a Pearson's correlation.
                                                      1. 3. Data should be free of statistical outliers

                                                        Note:

                                                        • Outliers have a disproportionate influence on the correlation coefficient (r), so including them misrepresents the data.
                                                        • Either exclude them, or use a Spearman's correlation (nonparametric) if they are more systematic.
                                                      2. Spearman's Correlation Assumptions (nonparametric)
                                                        1. 1. monotonic relationship between variables

                                                          Note:

                                                          • A monotonic relationship consistently goes in one direction - positive, negative or curved - rather than changing direction (e.g. a bell shape).
                                                          1. relationship that goes in one direction
                                                          2. 2. works on ordinal/ interval/ ratio data - no need to worry about the distribution
                                                            1. 3. outliers can be included in Spearman's analysed data

                                                              Note:

                                                              • Outliers do not exert as much influence because Spearman's correlations use ranks rather than means or s.d.s.
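A minimal sketch contrasting the two tests, assuming scipy is available; the data are made up and include one outlier to illustrate the difference:

```python
# Pearson's (parametric) vs Spearman's (nonparametric) correlation with scipy.
import numpy as np
from scipy import stats

x = np.array([2.0, 4.1, 5.9, 8.2, 9.8, 30.0])   # last value is an outlier
y = np.array([1.1, 2.0, 3.2, 3.9, 5.1, 6.0])

r, p = stats.pearsonr(x, y)           # uses means/SDs, so the outlier pulls r around
rho, p_rho = stats.spearmanr(x, y)    # uses ranks, so the outlier matters less

print(f"Pearson  r({len(x) - 2}) = {r:.2f}, p = {p:.3f}")
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")
```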
                                                          3. tell us whether variables covary with other variables
                                                            1. Pearson's correlation formula

                                                              Note:

                                                              • a. For each case, subtract the mean from the score on the X variable; do the same for the Y variable; multiply these two deviations; then add the products together across all cases.
                                                              • b. For each case, subtract the mean from the score on the X variable and square the difference; add the squared values together across all cases and take the square root. Repeat for the Y variable and multiply the two results.
                                                              • r is the value from (a) divided by the value from (b). A numpy sketch of this calculation appears at the end of this branch.
                                                              1. Df = no. of pairs - 2
                                                                1. r(df) = r value, p = p value
                                                                  1. r = correlation coefficient
                                                                    1. indication of the strength of the relationship
                                                                    2. r2 = coefficient of determination
                                                                      1. measure of the strength of the relationship, describes the amount of variance explained
                                                                        1. effect size
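A sketch of the calculation described in the note above, using made-up data and only numpy:

```python
# Pearson's r from the raw-score recipe: sum of cross-products of deviations,
# divided by the product of the root sums of squared deviations.
import numpy as np

x = np.array([3.0, 5.0, 6.0, 8.0, 9.0])
y = np.array([2.0, 4.0, 5.0, 7.0, 9.0])

numerator = np.sum((x - x.mean()) * (y - y.mean()))     # step (a)
denominator = (np.sqrt(np.sum((x - x.mean()) ** 2))
               * np.sqrt(np.sum((y - y.mean()) ** 2)))  # step (b)

r = numerator / denominator
r_squared = r ** 2          # coefficient of determination (effect size)
df = len(x) - 2             # df = number of pairs - 2
print(f"r({df}) = {r:.2f}, r^2 = {r_squared:.2f}")
```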
                                                                    3. Scatterplots

                                                                      Note:

                                                                      • typically show relationships between pairs of variables. Each point represents one pair of observations at each measurement point
                                                                      1. Bottom left to top right = positive
                                                                        1. Top left to bottom right = negative
                                                                          1. the spread gives an indication of the strength of the relationship
                                                                            1. Direction and Strength
                                                                              1. If there is low or no spread between the data points then there is a very strong correlation between the variables

                                                                                Note:

                                                                                • If there is a reasonable spread, then there is a strong correlation between the variables.
                                                                                1. r value = 1/ -1

                                                                                  Note:

                                                                                  • The points fall on a perfect diagonal line; the greater the spread, the further r deviates from 1/-1.
                                                                                2. If there is a high spread then there is low or no correlation
                                                                            2. Interpreting correlations; facts about correlation coefficients
                                                                              1. range from -1 to 1.
                                                                                1. no units
                                                                                  1. they are the same for x and y as for y and x
                                                                                    1. positive values: as one variable increases so does the other
                                                                                      1. negative values: as one variable increases, the other decreases
                                                                                        1. positive relationship - as one value decreases, so does the other
                                                                                            1. the more spread out the data are, the further r deviates from 1 or -1
                                                                                            1. how close a value is to -1 or 1 indicates how close the two variables are to being perfectly linearly related
                                                                                            2. R values
                                                                                              1. Estimating r values

                                                                                                Note:

                                                                                                • 1. Plot your scatterplot and divide it into quadrants at the mean x and mean y values.
                                                                                                • 2. Count the number of points in each quadrant: a positive correlation will populate the positive quadrants more than the negative quadrants, and vice versa.
                                                                                                1. Calculating r values - determining whether two variables are associated.
                                                                                                  1. 1. Plot the raw values against one another

                                                                                                    Note:

                                                                                                    • Plotting raw values has scaling problems: the variables have different means and SDs, but we only care about the relationship, not the means.
                                                                                                    • If all the values sit along the bottom of the plot, we need to look at the data in a way that accounts for the differing means and SDs of each axis - therefore convert to z scores.
                                                                                                    1. 2. Z scores give you values which have a mean of 0 and a standard deviation of 1.

                                                                                                      Note:

                                                                                                      • z score = (score - mean) / SD. No scaling or unit problems.
                                                                                                      • Converting raw scores into z scores allows direct comparison between scores even if they are measured on different scales, and thus enables a comparison of the relative probabilities of each.
                                                                                                      • Z scores are called standard scores because the measurement scales are converted into a standardised format (mean = 0, SD = 1).
                                                                                                      1. 3. r = the adjusted average of the product for each standardised x-y coordinate pair

                                                                                                        Note:

                                                                                                        • Points in the top-right and bottom-left quadrants produce positive products; points in the other two quadrants produce negative products.
                                                                                                        • Picture the area of the rectangle between each point and the two means; this is done for every pair of values. A bigger area (further from the means) means a larger contribution to the correlation value (r).
                                                                                                        • Outliers therefore artificially inflate the correlation value.
                                                                                                        1. the closer to the diagonal a point is, the more it contributes to the r value.
                                                                                                          1. The further away from both means a point is, the more it contributes to r.
                                                                                                          2. r = Σ(zX × zY) / (N − 1)

                                                                                                            Note:

                                                                                                            • where zX = (X − x̄) / sX, and zY is defined in the same way for Y.
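A sketch of the same calculation via z scores, matching the formula above (numpy only; the sample SD, ddof=1, is used so that this route agrees exactly with the raw-score version):

```python
# Pearson's r via standardised scores: r = sum(zX * zY) / (N - 1).
import numpy as np

x = np.array([3.0, 5.0, 6.0, 8.0, 9.0])
y = np.array([2.0, 4.0, 5.0, 7.0, 9.0])

zx = (x - x.mean()) / x.std(ddof=1)   # z score = (score - mean) / SD
zy = (y - y.mean()) / y.std(ddof=1)

r = np.sum(zx * zy) / (len(x) - 1)
print(round(r, 3))                    # same value as the raw-score calculation above
```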
                                                                                                      2. Limitations
                                                                                                        1. Correlation does not equal causation

                                                                                                          Note:

                                                                                                          • There can be a causal link, but correlation analyses do not allow us to conclude this. To demonstrate causation, a controlled experiment is needed.
                                                                                                      3. Regression
                                                                                                        1. what is regression?
                                                                                                          1. a family of inferential statistics
                                                                                                            1. Test of association
                                                                                                              1. Help make predictions about data
                                                                                                                1. used when causal relationships are likely
                                                                                                                2. Correlation does not tell you how much to intervene
                                                                                                                    1. line of best fit
                                                                                                                    1. formula of the line gives the exact answer
                                                                                                                    2. Predictions
                                                                                                                        1. it is possible to make predictions about how predictor variables will affect outcome variables
                                                                                                                        1. regression gives an indication of the:
                                                                                                                          1. unstandardised relationship
                                                                                                                            1. between outcome (y-axis) and predictor (x-axis) variables
                                                                                                                              1. using calculations of the intercept and gradient
                                                                                                                                1. expressed in the form Y = a + bX
                                                                                                                                  1. a = intercept/ constant
                                                                                                                                    1. b = gradient/ coefficient
                                                                                                                                      1. in order to determine a, you need to calculate b first
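A minimal sketch of calculating b and then a by least squares, with made-up data and numpy only:

```python
# Simple linear regression Y = a + bX (numpy only, illustrative data).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # predictor (x-axis)
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # outcome (y-axis)

# b (gradient/coefficient) must be calculated first...
b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
# ...then a (intercept/constant) follows from the means
a = y.mean() - b * x.mean()

print(f"Y = {a:.2f} + {b:.2f}X")
print("predicted Y at X = 6:", round(a + b * 6, 2))   # using the line to predict
```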
                                                                                                                                2. Assumptions
                                                                                                                                  1. 1. the data are linearly related
                                                                                                                                    1. 2. Homoscedasticity of data
                                                                                                                                      1. residuals
                                                                                                                                        1. residuals are the difference between the actual outcome score and the predicted outcome score
                                                                                                                                          1. need same degree of variation across all predictor variable scores
                                                                                                                                            1. if data are heteroscedastic, a regression isn't the appropriate analysis
                                                                                                                                        2. simple regression
                                                                                                                                          1. predicting one outcome variable from one predictor variable
                                                                                                                                            1. Y = a + bX
                                                                                                                                              1. SPSS output
                                                                                                                                                1. 1. descriptive statistics
                                                                                                                                                  1. 2. correlation coefficient
                                                                                                                                                    1. 3. variables entered and removed
                                                                                                                                                      1. variable entered = predictor variable
                                                                                                                                                        1. dependent variable = outcome variable
                                                                                                                                                        2. 4. model summary (R values)
                                                                                                                                                          1. 5. Check assumptions - graph tests of homoscedasticity
                                                                                                                                                            1. 3 charts at the bottom
                                                                                                                                                              1. frequency plot of standardised residuals
                                                                                                                                                                1. histogram of residual values
                                                                                                                                                                  1. want normal distribution
                                                                                                                                                                    1. bars should approx fit the normal curve
                                                                                                                                                                      1. good indication of homoscedasticity
                                                                                                                                                                      2. normal plot of regression standardised residual
                                                                                                                                                                        1. points should follow the diagonal line
                                                                                                                                                                        2. scatterplot of regression standardised residual and regression standardised predicted value
                                                                                                                                                                          1. DV = change
                                                                                                                                                                            1. plots standardised predicted y values (x axis) against their corresponding residuals
                                                                                                                                                                              1. want to see a diffused cloud - no distinct patterns
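A rough sketch of producing equivalent residual checks outside SPSS, assuming matplotlib and scipy are available; the data and fitted line are simulated purely for illustration:

```python
# The three residual checks: histogram of standardised residuals, normal
# (Q-Q) plot, and residuals vs predicted values (want a diffuse cloud).
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 2.0 + 1.5 * x + rng.normal(0, 1, 100)           # outcome with random noise

b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
a = y.mean() - b * x.mean()
predicted = a + b * x
residuals = y - predicted                           # actual minus predicted
std_resid = (residuals - residuals.mean()) / residuals.std(ddof=1)

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
axes[0].hist(std_resid, bins=15)                    # want an approx. normal shape
axes[0].set_title("Histogram of residuals")
stats.probplot(std_resid, plot=axes[1])             # points should follow the diagonal
axes[1].set_title("Normal plot of residuals")
axes[2].scatter(predicted, std_resid)               # no distinct pattern = homoscedastic
axes[2].set_title("Residuals vs predicted")
plt.tight_layout()
plt.show()
```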
                                                                                                                                                                          2. Determining whether the regression model is statistically valid - 3 R values
                                                                                                                                                                             1. R = the correlation between the model's predicted values and the observed outcome values
                                                                                                                                                                              1. R2 = amount of variance in the data that is explained by the model (%)
                                                                                                                                                                                1. most important value
                                                                                                                                                                                   2. adjusted R2 = R2 corrected for how much variance would be expected by chance (i.e. adjusted for the number of predictors)
                                                                                                                                                                                  1. ANOVA table
                                                                                                                                                                                    1. test of whether the regression model is better than using the mean outcome value (y) for all cases
                                                                                                                                                                                         1. is the model significantly better at predicting the outcome than simply using the mean?
                                                                                                                                                                                           1. report R2, then the ANOVA result
                                                                                                                                                                                      2. Reporting Results
                                                                                                                                                                                        1. 1. Check descriptives and correlations
                                                                                                                                                                                          1. 2. Check that predictor and outcome variables show a linear relationship (scatterplot)
                                                                                                                                                                                               1. 3. Check that the homoscedasticity assumption is not violated
                                                                                                                                                                                                 1. Report the R2 in the text, and the ANOVA results
                                                                                                                                                                                                1. R2 = , F( , )= , p <
                                                                                                                                                                                                2. Report the coefficients in a table
                                                                                                                                                                                            2. Multiple Regression
                                                                                                                                                                                              1. Predicting one outcome variable from more than one predictor variable
                                                                                                                                                                                                1. Formula: Y = a + b1X1 + b2X2 +b3X3
                                                                                                                                                                                                  1. many participants are needed
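A minimal sketch of fitting Y = a + b1X1 + b2X2 + b3X3 with numpy's least-squares solver; all data below are simulated purely for illustration:

```python
# Multiple regression fitted with numpy's least-squares solver.
import numpy as np

rng = np.random.default_rng(2)
n = 200                                         # many participants are needed
X = rng.normal(size=(n, 3))                     # three predictor variables
y = 1.0 + 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 1, n)

design = np.column_stack([np.ones(n), X])       # the column of 1s estimates a
coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
a, b1, b2, b3 = coefs

predicted = design @ coefs
r_squared = 1 - np.sum((y - predicted) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"Y = {a:.2f} + {b1:.2f}X1 + {b2:.2f}X2 + {b3:.2f}X3, R^2 = {r_squared:.2f}")
```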
                                                                                                                                                                                                    1. Methods
                                                                                                                                                                                                      1. predictors can be entered in many different orders
                                                                                                                                                                                                        1. Simultaneous
                                                                                                                                                                                                          1. all predictors are entered at the same time
                                                                                                                                                                                                            1. use for exploratory analysis
                                                                                                                                                                                                            2. Hierarchical
                                                                                                                                                                                                              1. predictors are entered in a pre-defined order
                                                                                                                                                                                                                1. used when regressions are informed by well-defined theory
                                                                                                                                                                                                                2. Stepwise
                                                                                                                                                                                                                  1. predictors are entered in an order driven by how well they correlated with the outcome
                                                                                                                                                                                                                    1. not used often as unstable
                                                                                                                                                                                                                  2. SPSS output
                                                                                                                                                                                                                    1. 1. Descriptive Statistics
                                                                                                                                                                                                                      1. 2. Correlations
                                                                                                                                                                                                                        1. 3. Assumptions - visual tests for homoscedasticity
                                                                                                                                                                                                                          1. 4. Model
                                                                                                                                                                                                                            1. summary
                                                                                                                                                                                                                              1. how good the model is, R2
                                                                                                                                                                                                                              2. ANOVA significance
                                                                                                                                                                                                                            2. Reporting Results
                                                                                                                                                                                                                              1. 1. Check descriptives and correlations
                                                                                                                                                                                                                                1. 2. Difficult to check for linear relationships
                                                                                                                                                                                                                                  1. 3. Check that homoscedasticity assumption is not violated
                                                                                                                                                                                                                                    1. 4. Report the R2 value
                                                                                                                                                                                                                                      1. R2 = , F(df, df) = , p =
                                                                                                                                                                                                                                      2. 5. Report the coefficients in a table
                                                                                                                                                                                                                                      3. multicollinearity occurs when predictor variables are highly correlated with each other; this is undesirable
                                                                                                                                                                                                                                      4. Summary
                                                                                                                                                                                                                                        1. Regression analyses allow us to make predictions about outcome variables using predictor variables
                                                                                                                                                                                                                                          1. All regressions assume homoscedasticity
                                                                                                                                                                                                                                            1. Simple (bivariate) regression uses one predictor variable. Multiple regression uses more than one.
                                                                                                                                                                                                                                              1. To report regressions:
                                                                                                                                                                                                                                                1. i) report R2 and the ANOVA in the text
                                                                                                                                                                                                                                                  1. ii) report the coefficients in a table
                                                                                                                                                                                                                                              2. Correlation is used to examine the relationship between variables
                                                                                                                                                                                                                                                1. Regression is used to make predictions about scores on one variable based on knowledge of the values of others

                                                                                                                                                                                                                                                Similar

                                                                                                                                                                                                                                                Statistics Key Words
                                                                                                                                                                                                                                                Culan O'Meara
                                                                                                                                                                                                                                                SAMPLING
                                                                                                                                                                                                                                                Elliot O'Leary
                                                                                                                                                                                                                                                FREQUENCY TABLES: MODE, MEDIAN AND MEAN
                                                                                                                                                                                                                                                Elliot O'Leary
                                                                                                                                                                                                                                                HISTOGRAMS
                                                                                                                                                                                                                                                Elliot O'Leary
                                                                                                                                                                                                                                                CUMULATIVE FREQUENCY DIAGRAMS
                                                                                                                                                                                                                                                Elliot O'Leary
                                                                                                                                                                                                                                                TYPES OF DATA
                                                                                                                                                                                                                                                Elliot O'Leary
                                                                                                                                                                                                                                                GROUPED DATA FREQUENCY TABLES: MODAL CLASS AND ESTIMATE OF MEAN
                                                                                                                                                                                                                                                Elliot O'Leary
                                                                                                                                                                                                                                                Statistics Vocab
                                                                                                                                                                                                                                                Nabeeha Yusuf
                                                                                                                                                                                                                                                chapter 1,2 statistics
                                                                                                                                                                                                                                                Rigo Sanchez
                                                                                                                                                                                                                                                Statistics, Data and Area (Semester 2 Exam)
                                                                                                                                                                                                                                                meg willmington
                                                                                                                                                                                                                                                Chapter 7: Investigating Data
                                                                                                                                                                                                                                                Sarah L