Writing, Administering, & Interviewing
HOUGEN
Outline for Today
• Item Writing
• Test Administration
• Interviewing in the context of testing
Item Writing
• Define clearly what you want to measure
• Generate an item pool
• Avoid exceptionally long items
• Keep the level of reading difficulty appropriate for those who will complete
the scale
• Avoid “double-barreled” items that convey two or more ideas at the same
time
• Consider mixing positively and negatively worded items (negatively worded
items must be reverse-scored before summing; see the sketch below)
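A minimal sketch of reverse-scoring in Python, assuming a 5-point Likert response scale (introduced on the next slide) and hypothetical item positions; on a 5-point scale, a negatively worded item's score becomes 6 minus the raw response:

```python
import numpy as np

# Hypothetical: one respondent's answers to five 5-point Likert items,
# where items 2 and 4 are negatively worded.
responses = np.array([5, 2, 4, 1, 5])
negatively_worded = [1, 3]          # zero-based indices of items 2 and 4

scored = responses.copy()
scored[negatively_worded] = 6 - scored[negatively_worded]  # 1<->5, 2<->4, 3 stays
print(scored.sum())                 # scale total after reverse-scoring
```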
Item Formats
• The Dichotomous Format
• Two alternatives for each item;
example: True/False
• The Polytomous Format
• Multiple alternatives for each
item; uses distractors; example:
multiple choice; popular in
academic testing
• The Likert Format
• A popular format for attitude and
personality scales that requires
respondents to indicate their
degree of agreement with a
particular statement
• often a 5-point scale:
strongly disagree, disagree,
neither agree nor disagree,
agree, strongly agree
Item Formats
• The Category Format
• Similar to Likert format but uses an even greater number of choices; example:
10-point pain scale
• Checklists and Q-Sorts
• Checklists: respondents choose relevant descriptors/adjectives from a list
• Q-Sorts: respondents categorize adjectives from checklist into piles according
to how relevant the adjective is to them
Item Analysis
• Item analysis is a set of methods used to evaluate test items;
it analyzes item difficulty and item discriminability
• Item Difficulty: defined by the proportion of people who get a particular
item correct; example: if 84% of people get an item on a test correct,
the difficulty level is .84.
• Discriminability: an assessment of whether the people who have
done well on particular items have also done well on the whole test
• Extreme Group Method: compares the item performance of high- and
low-scoring segments of test takers
• Point-Biserial Method: correlation between performance on the item and
performance on the total test (a computational sketch follows)
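A minimal sketch of both statistics in Python, using simulated (hypothetical) response data. One common refinement, used here, is to correlate each item with the total score excluding that item (the corrected item-total correlation):

```python
import numpy as np

# Hypothetical data: 50 examinees x 10 items, 1 = correct, 0 = incorrect.
rng = np.random.default_rng(0)
responses = (rng.random((50, 10)) > 0.4).astype(int)
totals = responses.sum(axis=1)

for j in range(responses.shape[1]):
    item = responses[:, j]
    difficulty = item.mean()              # proportion answering correctly
    rest = totals - item                  # total score excluding this item
    r_pb = np.corrcoef(item, rest)[0, 1]  # point-biserial discriminability
    print(f"Item {j + 1}: difficulty = {difficulty:.2f}, r_pb = {r_pb:.2f}")
```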
Item Analysis
• Pictures of Item Characteristics
• An item characteristic curve offers a valuable way to learn about items by
graphing their characteristics: plot total test score (X) against the proportion
of examinees who get the item correct (Y); see the first sketch after this list
• Linking Uncommon Measures
• Complex statistical process through which different tests are linked and
related; for example, the SAT uses different items each time, so how do we
relate different versions of the SAT? (a simplified equating sketch also follows)
• Items for Criterion-Referenced Tests
• Items must be carefully constructed to reflect learning objectives
• Limitations of Item Analysis
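A minimal sketch of an item characteristic curve in Python, using simulated (hypothetical) data in which higher-scoring examinees are more likely to answer the item correctly; a well-discriminating item produces a curve that rises with total score, while a flat curve flags a poor item:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data: total test scores (0-20) and 0/1 correctness on one item.
rng = np.random.default_rng(0)
scores = rng.integers(0, 21, size=200)
correct = (rng.random(200) < scores / 20).astype(int)

# For each observed total score, plot the proportion who got the item right.
levels = np.unique(scores)
proportions = [correct[scores == s].mean() for s in levels]

plt.plot(levels, proportions, marker="o")
plt.xlabel("Total test score")
plt.ylabel("Proportion answering item correctly")
plt.title("Item characteristic curve")
plt.show()
```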
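Operational linking of forms like the SAT relies on far more sophisticated item response theory models; purely as an illustration of the idea, linear equating maps a score on one form onto another form's scale by matching means and standard deviations:

```python
import numpy as np

def linear_equate(x, form_x_scores, form_y_scores):
    """Map a score x on Form X onto the Form Y scale by matching
    means and standard deviations (linear equating; illustrative only)."""
    mx, sx = np.mean(form_x_scores), np.std(form_x_scores, ddof=1)
    my, sy = np.mean(form_y_scores), np.std(form_y_scores, ddof=1)
    return my + (sy / sx) * (x - mx)

# Hypothetical score samples from two groups taking different forms.
form_x = np.array([48, 52, 55, 60, 63, 67, 70, 74])
form_y = np.array([51, 54, 58, 61, 66, 69, 73, 77])
print(linear_equate(60, form_x, form_y))  # a Form X score of 60 on the Y scale
```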
Test Administration
• The Examiner and the Subject
• The Relationship between Examiner and Test Taker
• Rapport can influence results!
• The Race of the Tester
• Race effects are small; effects of administrator’s race are negligible
• Language of Test Taker
• Whether the language of the test matches the language of the test taker can
impact reliability and validity
Test Administration
• Training of Test Administrators
• Different assessment procedures require different levels of training; not all
require a specific degree, but most require specific training
• Expectancy Effects
• Bias introduced into the test by what the examiner expects to find
• Effects of Reinforcing Responses
• Inconsistent use of feedback can affect the reliability and validity of tests;
example: examiner responses on IQ tests are closely monitored
Test Administration
• Computer-Assisted Test Administration
• Aids in streamlining and standardizing testing conditions; remains interactive
and guides test takers through the test; provides precision in timing (see the
sketch after this list); removes experimenter bias; etc.
• Mode of Administration
• Self-administered vs. administered by tester
• Subject Variables
• Variables related to the state or trait of the test taker (race, level of
anxiety, gender, amount of sleep before taking the test, etc.) can be
confounding variables; they sometimes require measurement in order to
understand how they affect test results
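As a minimal illustration of that timing precision (hypothetical items, not any real testing platform), a computer can time-stamp every response without examiner involvement:

```python
import time

# Hypothetical items; a real computer-assisted test would draw from an item bank.
items = ["2 + 2 = ?", "Capital of France?"]

for prompt in items:
    start = time.perf_counter()
    answer = input(prompt + " ")
    latency = time.perf_counter() - start   # response time in seconds
    print(f"Recorded answer {answer!r} with latency {latency:.3f} s")
```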
Behavioral Assessment Methodology
• Reactivity – observers react when they know their work is being “checked”; such monitoring increases reliability and validity
• Drift – observers’ gradual movement away from strict training rules and regulations toward idiosyncratic definitions of the behavior being observed
• Expectancies – expectancy bias can occur with observers as well
• Deception – the need to detect honesty/lying in employment hiring processes leads
to the use of integrity tests; analysis of integrity tests also aids in
understanding observers’ honest appraisal of observed behavior; detecting
lying is actually very difficult, and people do it unreliably
• Statistical Control of Rating Errors – correlations can be used to determine
the impact of the “halo effect,” the tendency to attribute positive
characteristics to a test taker regardless of test performance; statistical
control is used because training alone does a poor job of enhancing
interrater reliability (a partial-correlation sketch follows)
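One common statistical control is partial correlation: regress the overall “halo” impression out of a specific rating before correlating it with performance. A minimal sketch with simulated (hypothetical) ratings:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlate x and y after regressing out z (e.g., an overall
    'halo' impression) from both via simple linear regression."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
halo = rng.normal(size=100)                         # rater's overall impression
specific = 0.8 * halo + 0.2 * rng.normal(size=100)  # halo-contaminated rating
performance = rng.normal(size=100)                  # actual test performance

print(np.corrcoef(specific, performance)[0, 1])     # raw correlation
print(partial_corr(specific, performance, halo))    # correlation with halo removed
```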
Interviewing Techniques
• The Interview as a Test
• In some circumstances, an interview is conducted before administering a test
• The interview resembles the test in that it is designed to gather relevant
information pertaining to test performance
• Reciprocal Nature of Interviewing
• Interviewing is mutual interaction: interviewer and interviewee influence
each other’s behavior and participation, particularly in the realms of:
• Activity
• Mood
Interviewing Techniques
• Principles of Effective Interviewing
• The Proper Attitudes
• Effective interviewing is more attitude than skill: interpersonal influence
tracks interpersonal attraction, i.e., how much the dyad has in common
• Responses to Avoid vs. Effective Responses
• Avoid judgmental or evaluative statements; inappropriate use of probing statements;
hostile responses; false reassurance
• Err on the side of maintaining a safe, cooperative environment
• Closed vs. open-ended questions
Interviewing Techniques
• Responses to Keep the Interaction Flowing
• Transitional phrases: “yes”, “I see”, “go on”
• Verbatim playback
• Paraphrasing and restatement
• Summarizing
• Clarification response
• Empathy and understanding
• Measuring Understanding
• 5-point measure of empathy based on Carl Rogers’s interviewing approach,
which emphasized active listening
Types of Interviews
• Evaluation Interviews
• Open-ended; direct questions; confrontation
• Structured Clinical Interviews
• Specific questions in a specific order; a highly structured interview designed
to match criteria from the DSM-5; examples: SCID, MINI
• Case History Interviews
• “Biographical sketch”; in-depth, chronological history (work, family, medical,
etc.)
• Mental Status Examination
• Psychiatric and neurological examination designed to evaluate emotional or
neurological problems; focus on attention and alertness
Sources of Error in the Interview
• Interview Validity
• Halo effect
• General standoutishness
• Cultural awareness
• Interview Reliability
• Inter-interviewer reliability (a kappa sketch follows)
• Structured vs. unstructured interviews
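Inter-interviewer agreement is often quantified with Cohen’s kappa, which corrects raw agreement for chance. A minimal sketch with hypothetical classifications from two interviewers:

```python
import numpy as np

# Hypothetical: two interviewers each classify the same 8 interviewees.
rater_a = np.array(["hire", "hire", "no", "hire", "no", "no", "hire", "no"])
rater_b = np.array(["hire", "no", "no", "hire", "no", "hire", "hire", "no"])

p_obs = np.mean(rater_a == rater_b)   # observed proportion of agreement
labels = np.union1d(rater_a, rater_b)
# Chance agreement from each rater's marginal label proportions
p_exp = sum(np.mean(rater_a == l) * np.mean(rater_b == l) for l in labels)

kappa = (p_obs - p_exp) / (1 - p_exp)
print(f"kappa = {kappa:.2f}")
```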
