University of Nevada, Las Vegas conducted a value-added administration of the ACT CAAP from Fall 2009 through Spring 2011. The results are displayed below in the SLO Results tab.
For additional information on UNLV’s process for administering the ACT CAAP, please click the Assessment Process tab below. For information on the students included in the administration, please click the Students Tested tab.
The CAAP exam is administered to all freshmen enrolled in ENG 101 as a course requirement. Because of the existing data from freshmen, it is advantageous to test seniors and compare the results between the two groups.
Freshmen were assessed in Fall 2009 in their ENG 101 course.
Seniors were assessed in Spring 2010 and Spring 2011 in various upper-division courses to ensure that a representative sample was obtained.
Assessment data from the CAAP exams was collected by administering either the Critical Thinking module or the Writing Essay module to students during class time. Trained test proctors ensured fidelity of test administration.
All testing materials are sent to ACT for scoring. ACT then sends a data report to UNLV. The Office of Academic Assessment prepares a report, which is reviewed by the Vice Provost for Academic Affairs and posted on the Academic Assessment website. Data is shared with the UNLV campus and is used as part of a data-driven decision-making process.
The CAAP data is used to provide information about attainment of the University Undergraduate Learning Outcomes (UULOs). The comparison between freshmen and seniors provides one indicator for the attainment of these outcomes and allows programs to identify strengths and weaknesses within their respective curricula.
Of 5594 freshmen students eligible to be tested, 1063 (19%) were included in the tested sample at University of Nevada, Las Vegas.
Of 15986 senior students eligible to be tested, 490 (3%) were included in the tested sample at University of Nevada, Las Vegas.
Probability sampling, where a small randomly selected sample of a larger population can be used to estimate the learning gains in the entire population with statistical confidence, provides the foundation for campus-level student learning outcomes assessment at many institutions. It's important, however, to review the demographics of the tested sample of students to ensure that the proportion of students within a given group in the tested sample is close to the proportion of students in that group in the total population. Differences in proportions don't mean the results aren't valid, but they do mean that institutions need to use caution in interpreting the results for the groups that are under-represented in the tested sample.
Undergraduate Student Demographic Breakdown

| Demographic Group | Freshmen: Eligible Students | Freshmen: Tested Students | Seniors: Eligible Students | Seniors: Tested Students |
|---|---|---|---|---|
| Other or Unknown | <1% | 1% | <1% | <1% |
| US Underrepresented Minority | 22% | <1% | 26% | <1% |
| White / Caucasian | 37% | 46% | 47% | 16% |
| Low-income (Eligible to receive a Federal Pell Grant) | <1% | <1% | <1% | <1% |
The freshmen sample was representative, as all freshmen students took the CAAP exam in their ENG 101 course.
The senior sample included more students from certain colleges than the expected proportion based on UNLV enrollment data. This is because our sampling technique relied on participation from instructors and the availability of courses with high senior enrollment. We took this information into account when interpreting our results.
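The representativeness check described above can be sketched in a few lines. The group shares and the 5-percentage-point threshold below are illustrative stand-ins (e.g. 0.005 for a "<1%" cell), not UNLV's actual procedure:

```python
# Compare each group's share of the tested sample against its share of the
# eligible population, and flag groups with a notable gap.
# Figures approximate the freshmen columns above; "<1%" is coded as 0.005.
eligible = {"URM": 0.22, "White": 0.37, "Other": 0.01}
tested   = {"URM": 0.005, "White": 0.46, "Other": 0.01}

THRESHOLD = 0.05  # illustrative 5-point gap before we flag a group

for group in eligible:
    gap = tested[group] - eligible[group]
    if abs(gap) > THRESHOLD:
        print(f"{group}: tested share differs from eligible share by {gap:+.1%}")
```

A gap alone does not invalidate the results; as noted above, it signals that results for the under-represented group should be interpreted with caution.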
The VSA provides sample size guidelines for institutions based on a 95% confidence interval and a 5% margin of error. As long as the tested sample's demographics represent the student body, we can be 95% certain that the “true” population learning outcomes are within +/- 5% of the reported results. For more information on sampling, please refer to the Research Methods Knowledge Base.
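As a rough illustration of these guidelines, the margin of error for a proportion at 95% confidence can be computed with a standard finite-population correction. This is a back-of-the-envelope sketch using the sample sizes reported in the Students Tested figures, not the VSA's exact formula:

```python
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """Margin of error for a proportion at ~95% confidence (z = 1.96),
    with a finite-population correction. p = 0.5 is the conservative
    (worst-case) assumed proportion."""
    se = z * math.sqrt(p * (1 - p) / n)          # simple-random-sample error
    fpc = math.sqrt((N - n) / (N - 1))           # finite-population correction
    return se * fpc

# UNLV tested samples from the figures above
freshmen = margin_of_error(n=1063, N=5594)       # roughly +/- 2.7%
seniors = margin_of_error(n=490, N=15986)        # roughly +/- 4.4%
print(f"freshmen: +/-{freshmen:.1%}, seniors: +/-{seniors:.1%}")
```

Both samples come in under the +/- 5% margin the VSA guidelines target, which is consistent with the sample sizes UNLV tested.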
The increase in learning on the critical thinking test is at or near what would be expected at an institution testing students of similar academic abilities.
The increase in learning on the writing essay test is at or near what would be expected at an institution testing students of similar academic abilities.
The charts below show the distribution of student scores on the ACT CAAP Written Communication Test. The test is scored on a rubric ranging from 1 to 6 in intervals of 0.5, with 6 representing the highest score. Each student’s response is scored by two raters; the rating distributions for each rater are shown below. The Overall Writing Score is the average of the two ratings.
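The scoring described above can be sketched as a small helper. The function name and validation are hypothetical illustrations of the rubric, not ACT's implementation:

```python
def overall_writing_score(rater1: float, rater2: float) -> float:
    """Average two rubric ratings, each 1.0-6.0 in 0.5-point steps
    (hypothetical helper illustrating the scoring described above)."""
    for r in (rater1, rater2):
        # each rating must be on the 1-6 scale at a half-point interval
        assert 1.0 <= r <= 6.0 and (r * 2) == int(r * 2), "invalid rubric rating"
    return (rater1 + rater2) / 2

print(overall_writing_score(3.5, 4.0))  # → 3.75
```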
The chart below shows the distribution of student scores on the ACT CAAP Critical Thinking Test. Students receive a scaled score between 1 and 80, with 80 representing the highest score.