What is the old view of organizational systems?
Stable and predictable, hierarchy, top-down command and control.
What is the contemporary view of organizations?
Globalized and focused on speed, constantly changing, organized around networks, more demographically diverse, continuous learning, self-managed work teams, focus on self-reliance.
Which of these does not fit the 3 C's of the past manager role?
Command.
Conquer.
Control.
Compartmentalization.
Which of these does not fit the contemporary perspective for managers?
Incumbents to Managers.
Controllers to Coaches.
Planners to Facilitators.
Inspectors to Mentors.
Which source has not resulted in changes in jobs and job performance in recent times?
Technology.
Globalization.
Economic Stability.
Mergers & acquisitions.
Which of these is a benefit of recent changes in jobs and job performance?
Challenge.
Increased Personal Control.
Insecurity.
All of the Above.
None of the Above.
Both A and B.
Both A and C.
What defines each job in terms of the behaviors necessary to perform it and is used to develop hypotheses about the personal characteristics necessary to perform those behaviors?
Job specifications
Minimum qualifications
Job analysis
Job descriptions
The following characteristics are representative of what? Qualifications, experience, training, skills, responsibilities, emotional characteristics, sensory demands.
Job specifications
The following characteristics are representative of what? Job title, job location, job summary, working conditions, job duties, conditions of employment, hazards, social environment.
Job description
Job descriptions are useful for:
Selection
Training
Personpower planning
Job classification
All of the above
None of the above
The first step of conducting a job analysis is
Writing task statements
Rating task statements
Determining essential KSAOs
Gathering existing information
Which is not a method of collecting job analysis data?
SME panels
Questionnaires
O*NET
What evidence is available regarding job analysis reliability and validity?
Daniels & Weiss
Davidson & Wurtz
Dierdorff & Wilson
Donalds & West
According to the Dierdorff & Wilson (2003) Meta-analysis, who showed higher inter-rater reliability?
Incumbents
Researchers
Analysts
Supervisors
According to the Dierdorff & Wilson (2003) Meta-analysis, what type of analysis is better?
Cognitive
Task-focused
Knowledge-based
_____________________ scales using importance and difficulty had higher inter-rater reliability than __________________scales.
Frequency, Temporal
Frequency, Descriptive
Descriptive, Frequency
Descriptive, Temporal
What is an evaluative standard, rule, or test by which a person may be judged or measured?
Reliability
Criterion
Validity
KSAO
What criteria should be used to establish MQs for a job?
Influenced directly by the desire to artificially manipulate the candidate pool.
Realistically expected to exist in the pool of candidates.
Determined based on the existing qualifications of incumbents or candidates being groomed.
Indirectly linked to functional areas/KSAs
When conducting a job analysis, average ratings of importance or frequency under .5 should be eliminated.
Criterion contamination occurs when we measure things that are not related to the job.
Sources of criterion contamination include
Prejudice
Racism
Sexism
Bias
Which of these does not belong?
Bias occurs through knowledge of the predictor.
Bias occurs through ratings.
Bias occurs through group membership.
Bias occurs through error.
Ratings can be biased through
Adequate observations
Limited opportunities to perform
Ability to distinguish skills
__________________ of criteria refers to the way employees' performance varies over time.
Temporal nature
Standard Error
Job Performance reliability refers to the method of observation and its effect on conclusions.
Examples of the dimensionality of job performance are: job and location, sales performance, leadership.
Criteria may be temporal or dynamic in 3 distinct ways:
Changes over time in rank ordering of scores on the criterion.
Changes over time in the leadership of the organization.
Changes over time in average levels of group performance.
Changes over time in validity coefficients.
Test/Re-test Reliability measurements should be taken over 6 months apart.
Reliability refers to
Creating alternative, equal forms for the same test
Administering the same test on two different occasions
Freedom from unsystematic errors of measurement
Pearson product moment correlation coefficient
Any factor that influences performance on one occasion but not the other might introduce bias and influence the reliability of the measure.
Parallel-forms reliability is also known as
Alternative
Equivalent
Counterpart
A and C
A and B
B and C
To increase reliability, increase the variability of individual differences among those taking the test.
Higher variability = higher reliability.
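A quick way to see why (a standard classical test theory identity, assumed here rather than taken from the deck):
r_{xx} = \sigma^2_T / \sigma^2_X = 1 - \sigma^2_E / \sigma^2_X
If error variance stays fixed while true individual differences (\sigma^2_T) grow, the ratio, and therefore reliability, increases.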
The difficulty of test items should be moderate (around .50); items that are too hard or too easy reduce variability and therefore reliability.
The smaller the sample size, the larger the sampling error and the lower the reliability.
If reliability is 1, standard error of measurement is 0.
Higher standard error = lower reliability.
Standard error of measurement is the standard deviation of the normal distribution of scores that an individual would obtain if they took the same test 100 times.
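A worked illustration of the statements above, using the standard classical test theory formula (assumed, not quoted from the textbook):
SEM = SD_x \sqrt{1 - r_{xx}}
For example, with SD_x = 10 and r_{xx} = .84, SEM = 10 \times \sqrt{.16} = 4; with r_{xx} = 1, SEM = 0.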
Internal consistency reliability is
The extent to which all parts of a measure are similar in what they are measuring.
Used to assess the consistency of results across items within a test.
An indicator of the degree to which various items on a test are intercorrelated.
All of the above.
None of the above.
Internal Consistency Reliability according to the textbook is
.90 for a procedure
.80 and above for applied purposes
.70 for research purposes
1
The two (2) widely used methods of estimating internal consistency reliability are
Pearson product-moment correlation coefficient
Kuder-Richardson estimates
Split-half
The most widely used formula among the Kuder-Richardson reliability estimates is
KR-50
KR-10
KR-90
KR-20
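For reference, the KR-20 formula in its usual form for dichotomously scored items (standard presentation, assumed rather than quoted from the textbook):
KR\text{-}20 = \frac{k}{k-1}\left(1 - \frac{\sum p_i q_i}{\sigma^2_X}\right)
where k is the number of items, p_i is the proportion answering item i correctly, q_i = 1 - p_i, and \sigma^2_X is the variance of total scores.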
Split-half reliability estimate is interpreted as
Coefficient of stability
Coefficient of reliability
Coefficient of equivalence
Coefficient of validity
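A related note: because a split-half correlation is based on only half the test's length, it is usually stepped up with the Spearman-Brown formula (standard psychometric practice, assumed here):
r_{full} = \frac{2 r_{half}}{1 + r_{half}}
For example, a half-test correlation of .70 corrects to 2(.70)/1.70 \approx .82.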
Coefficient of stability is a measurement of the correlation between
2 parallel forms of the same test
2 time points where the subjects and measuring instrument are the same
2 groups of subjects taking the test
To compute the coefficient of stability, use the test-retest method.
Coefficient of equivalence refers to the correlation between
the same tests given on two different occasions
2 parallel forms
2 different groups of subjects taking the same test
For the coefficient of stability and equivalence, to guard against order effects, half of the examinees are given Form A followed by Form B, and the other half are given Form B followed by Form A.
The main advantage of computing reliability using the coefficient of stability and equivalence is that it accounts for
Random response errors
Bias responses
Specific factor errors
Transient errors
All of the Above
Coefficient of equivalence is also known as
ρ
α
ε
Σ
Random error is an error caused by unknown and unpredictable changes in the experiment.
Random errors reduce the consistency and usefulness of test scores.
Random error varies randomly from occasion to occasion; therefore:
obtained scores are equal to true scores
obtained scores are different from true scores
true scores are lower than obtained scores
true scores are higher than obtained scores
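The correct choice follows from the basic classical test theory model (standard notation, assumed here):
X = T + E
Because the error component E varies randomly from occasion to occasion, an obtained score X will generally differ from the true score T on any single administration.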
Reliability can range from 0 to 1.
Cronbach's alpha is calculated to measure
Test/Re-test reliability
Internal Consistency Reliability
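For reference, Cronbach's alpha in its usual form (standard formula, assumed rather than quoted from the textbook):
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum \sigma^2_i}{\sigma^2_X}\right)
where k is the number of items, \sigma^2_i is the variance of item i, and \sigma^2_X is the variance of total scores.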
Standard error of measurement helps establish confidence intervals.
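A minimal worked example, assuming a normal error distribution and the simple (unadjusted) interval:
95\% \text{ CI} \approx X \pm 1.96 \times SEM
An obtained score of 100 with SEM = 4 gives an interval of roughly 92.2 to 107.8.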
Which of these examples are sources of random error?
Poor lighting
Noises
Temperature
Mood
Validity is the extent to which a test measures what it claims to measure.
The Trinitarian view of validity requires which of these for a measure to be considered valid?
Construct validity
Content validity
Criterion validity
All three separately
The Unitarian view of validity requires which of these for a measure to be considered valid?
All three equally
Evidence related to the meaning of the construct and its relationship with other constructs is
Construct validity
The extent to which your measure contains a fair sample of the universe of situations it is supposed to represent is:
Content validity
How well your assessment tool is related to the criteria (test adequacy based on correlations with the criteria) is known as
Criterion validity
Evidence of construct validity can be gathered by calibrating the test against an established measure, a known standard, or the test itself.
Evidence of content validity can be gathered from SMEs, whose responses are evaluated to make decisions about the content.
Problems with Predictive validity include
Small sample size
Uncontrolled variables
Which of the following is a problem with concurrent validity?
Longitudinal nature of the study
Concurrent validity is measured when:
the criterion measure is available after the predictor measure is taken.
the criterion and predictor are collected at the same time.
the predictor measure is taken.
Predictive validity is measured when the criterion measure is available after the predictor measure is taken.
Challenges of changes in jobs and job performance include:
Insecurity
Challenge
Flexibility
High costs
Which of these is not a major step in conducting a job analysis?
Identify the tasks performed
Write task statements
Rate task statements
Determine KSAOs
Rate KSAOs