Friday, November 5, 2010

Validity and Reliability

Throughout the readings, especially in the Luker and Knight texts, and a couple of times during lecture, we've talked about validity and reliability - two key concepts in designing and evaluating research (designs, findings, conclusions, etc.). Here's a brief breakdown of some of the types that may come up in your own work for this course, along with some very basic definitions (for a more thoughtful discussion, refer to Luker):


Validity:
Extent to which a measure reflects a concept – reflecting neither more nor less than what was implied by the definition of the concept (i.e. your measures are valid to the extent that the chosen indicators reflect the concepts as defined).
o   Face Validity: An indicator that, upon inspection, appears to reflect the concept you wish to measure (the weakest form, and not very useful on its own). E.g. your operationalization of a concept is consistent with past literature.
o   Content Validity: The extent to which you develop a question (or questions) that properly fleshes out the concept (does the measure reflect the dimension(s) implied by the concept?).
o   Construct Validity: Uses multiple lines of evidence to determine your level of validity – gathering data from different sources to see if the same findings, same themes emerge. If they do, you have construct validity and can refer to your methodology as using multiple lines of evidence.

External Validity: The extent to which results may be extrapolated from a particular study to other groups in general.

Reliability:
Extent to which, on repeated measures, an indicator/measurement will produce similar readings. Are the findings replicable? Different types of reliability include:
o   Inter-Rater/Inter-Judge Reliability: Are researchers and respondents interpreting questions the same way?
o   Test-Retest Reliability: Conducting the same test on at least two occasions and ensuring people understand the questions the same way each time.
