Reliability, Validity, and Multiple-Item Scales in Statistics MCQ

1. A method of scale construction in which items are selected for inclusion in the scale because they have high correlations with the criterion of interest is known as:

Correct Answer: Empirical Keying

2. Discriminant validity means that when our theories tell us a measure X should be unrelated to another variable Y, a correlation near 0 is taken as evidence of discriminant validity, that is, evidence that X does not measure things it should not be measuring.

Correct Answer: True

3. _____________ is the degree to which a new measure, X', correlates with an existing measure, X, that is supposed to measure the same construct.

Correct Answer: Convergent Validity

4. Tests that involve the presentation of ambiguous stimuli (such as Rorschach inkblots or Thematic Apperception Test drawings) are known as:

Correct Answer: Projective Tests

5. The degree to which it is obvious from the content of the questions posed what attitudes or abilities a test measures is called __________.

Correct Answer: Face Validity

6. The degree to which the content of questions in a self-report measure covers the entire domain of material that should be included (based on theory or assessments by experts) is known as:

Correct Answer: Content Validity

7. The degree to which an X variable really measures the construct that it is supposed to measure is known as:

Correct Answer: Construct Validity

8. The form of reliability assessed when a test developer creates two versions of a test (which contain different questions but are constructed to include items matched in content) is called ___________.

Correct Answer: Parallel-Forms Reliability

9. A type of internal consistency reliability assessment used with multiple-item scales, in which the set of p items in the scale is divided (either randomly or systematically) into two sets of p/2 items, is known as:

Correct Answer: Split-Half Reliability

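For illustration, here is a minimal Python sketch of a split-half reliability computation; the respondent data and the random item split are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 50 respondents answering a p = 8 item scale.
# (Random data; a real scale's halves should correlate positively.)
scores = rng.integers(1, 6, size=(50, 8))

# Randomly divide the p items into two sets of p/2 items each.
items = rng.permutation(scores.shape[1])
half_a = scores[:, items[:4]].sum(axis=1)
half_b = scores[:, items[4:]].sum(axis=1)

# Split-half reliability: the correlation between the two half scores.
r_half = np.corrcoef(half_a, half_b)[0, 1]
print(f"split-half correlation: {r_half:.3f}")
```
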
10. ___________ is the name given to Cronbach’s alpha when all items are dichotomous (see also internal consistency reliability).

Correct Answer: Kuder-Richardson 20 (KR-20)

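For reference, KR-20 is computed as KR-20 = (k/(k-1)) * (1 - Σ p_j·q_j / σ²_X), where p_j is the proportion of respondents scoring 1 on item j, q_j = 1 - p_j, and σ²_X is the variance of the total scores. A minimal Python sketch on invented dichotomous responses:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data: 40 respondents, k = 10 dichotomous (0/1) items.
X = rng.integers(0, 2, size=(40, 10))

k = X.shape[1]
p = X.mean(axis=0)                     # proportion of 1s per item
q = 1 - p                              # proportion of 0s per item
var_total = X.sum(axis=1).var(ddof=1)  # variance of total scores

# KR-20 = (k / (k - 1)) * (1 - sum(p * q) / variance of totals)
kr20 = (k / (k - 1)) * (1 - (p * q).sum() / var_total)
print(f"KR-20: {kr20:.3f}")
```
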
11. When a correlation is obtained to index split-half reliability, that correlation actually indicates the reliability or consistency of a scale with only p/2 items; the formula used to estimate the reliability of the full p-item scale from that correlation is called the __________.

Correct Answer: Spearman-Brown Prophecy Formula

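The correction applied in that situation is r_full = 2·r_half / (1 + r_half), which projects the reliability of the full p-item scale from the correlation between the two halves. A minimal sketch with an invented split-half correlation:

```python
def spearman_brown(r_half: float) -> float:
    """Spearman-Brown prophecy formula: predicted reliability of the
    full p-item scale from the correlation between two p/2-item halves."""
    return 2 * r_half / (1 + r_half)

# Example with a hypothetical split-half correlation of .60:
print(spearman_brown(0.60))  # 0.75
```
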
12. Consistency or agreement across a number of measures of the same construct, usually multiple items on a self-report test, is known as:

Correct Answer: Internal Consistency Reliability

13. An index of internal consistency reliability that assesses the degree to which responses are consistent across a set of multiple measures of the same construct, usually self-report items, is known as:

Correct Answer: Cronbach’s alpha (α)

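Cronbach's alpha is computed as α = (p/(p-1)) * (1 - Σ σ²_item / σ²_total). A minimal Python sketch on an invented respondents-by-items matrix:

```python
import numpy as np

def cronbach_alpha(X: np.ndarray) -> float:
    """X: respondents x items matrix of item scores."""
    p = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)      # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)  # variance of total scores
    # alpha = (p / (p - 1)) * (1 - sum of item variances / total variance)
    return (p / (p - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-item scale answered by 6 respondents:
X = np.array([[4, 5, 4, 5, 4],
              [2, 3, 2, 2, 3],
              [5, 5, 4, 5, 5],
              [3, 3, 3, 4, 3],
              [1, 2, 1, 2, 2],
              [4, 4, 5, 4, 4]])
print(f"alpha: {cronbach_alpha(X):.3f}")
```
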
14. A statistic that assesses the degree of agreement in the assignment of categories made by two judges or observers (correcting for chance levels of agreement) is known as:

Correct Answer: Cohen’s kappa (κ)

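Cohen's kappa is κ = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected by chance from each judge's marginal category proportions. A minimal Python sketch with invented category assignments from two judges:

```python
import numpy as np

def cohens_kappa(a, b) -> float:
    a, b = np.asarray(a), np.asarray(b)
    p_o = (a == b).mean()  # observed proportion of agreement
    # Chance agreement: sum over categories of the product of the
    # two judges' marginal proportions for that category.
    p_e = sum((a == c).mean() * (b == c).mean() for c in np.union1d(a, b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical category assignments (3 categories) from two judges:
judge1 = [0, 1, 1, 0, 2, 1, 0, 2, 2, 1]
judge2 = [0, 1, 0, 0, 2, 1, 1, 2, 2, 1]
print(f"kappa: {cohens_kappa(judge1, judge2):.3f}")  # ~0.697
```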