
West Texas A&M University College Portrait



West Texas A&M University Learning Outcomes

To support the mission of West Texas A&M University (WTAMU), the Office of Learning Assessment coordinates systematic annual assessments, at the institutional and program levels, of three major areas of student learning: 1) Discipline Specific Knowledge (DSK), 2) the university Academic Core Curriculum (Core), and 3) the university General Learning Outcomes (GLOs). Other universities may treat General Education (Gen Ed) as a single area, but WTAMU separates its Gen Ed into the Core and the GLOs, allowing a more granular assessment of each component.

All three of WTAMU's major learning assessment areas follow the same annual reporting cycle. This common cycle lets us establish routines for gathering, retaining, and summarizing quantitative and qualitative information along with administrative and student data.

All university- and program-level assessments are recorded and reported to the Provost's office via WTAMU's annual Assessment Quality Assurance Reports and Audit Worksheet. At a glance, administrative stakeholders can use the reports or worksheet to monitor learning assessment reporting throughout the university and see how departments and programs are using data to inform decisions and continuously improve.

West Texas A&M University administered the ACT CAAP in 2014.

West Texas A&M University conducted a Value-added administration of the ACT CAAP in 2014. The results are displayed below in the SLO Results tab.

For additional information on WTAMU’s process for administering ACT CAAP, please click on the Assessment Process Tab below. For information on the students included in the administration, please click the Students Tested Tab.

Why did you choose the ACT CAAP for your institutional assessment?

We use the ACT-developed CAAP to provide a mid-point assessment of our students' learning outcomes as they progress toward their degrees. The CAAP is a national, standardized assessment instrument based on professional research and development by ACT, an independent, not-for-profit organization that has provided top-quality assessments and related services for more than half a century (see ACT). In addition, CAAP results can be correlated with ACT's entrance exams to assess academic growth at WTAMU through the core academic (general education) courses. After the core assessment, WTAMU students are assessed through their major departments by various means, with the goal of producing graduates well prepared for professional success.

Which West Texas A&M University students are assessed? When?

The ACT-developed CAAP was selected as a mid-level assessment because the instrument was designed to assess six components of general education (math, writing skills, writing essay, reading, critical thinking, and science), and those components directly align with the student learning outcomes for our university core. A random sample of over 1,000 students is selected each semester to participate in this course-embedded assessment.

How are assessment data collected?

The sampled students are given the relevant CAAP test module as an embedded assessment in the final course needed to complete that portion of the general education curriculum (e.g., College Algebra students take the math module).

How are data reported within West Texas A&M University?

The Office of Learning Assessment at WTAMU aggregates the data and provides university reports to all appropriate administrative levels. The university learning assessment committee then meets monthly to discuss and distribute findings to university administration and to gather feedback for improving data-gathering processes and making academic adjustments.

How are assessment data at WTAMU used to guide program improvements?

The university learning assessment committee consists of administrative representatives from each department, who evaluate these findings and determine how best to adjust academic processes based on their understanding of how well each program's students are meeting the university core outcomes.

Of 1,644 eligible freshmen, 269 (16%) were included in the tested sample at West Texas A&M University.

Of 132 eligible seniors, 47 (36%) were included in the tested sample at West Texas A&M University.

Probability sampling, where a small randomly selected sample of a larger population can be used to estimate the learning gains in the entire population with statistical confidence, provides the foundation for campus-level student learning outcomes assessment at many institutions. It's important, however, to review the demographics of the tested sample of students to ensure that the proportion of students within a given group in the tested sample is close to the proportion of students in that group in the total population. Differences in proportions don't mean the results aren't valid, but they do mean that institutions need to use caution in interpreting the results for the groups that are under-represented in the tested sample.
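The comparison described above can be sketched as a two-proportion z test between a group's share of the tested sample and its share of the eligible population. This is a generic illustration, not part of the VSA methodology; the function name is our own, and the example plugs in the female-freshmen figures reported on this page:

```python
import math

def representativeness_z(pop_p, n_pop, samp_p, n_samp):
    """Two-proportion z statistic comparing a demographic group's share
    in the tested sample against its share in the eligible population."""
    # Pooled proportion under the null hypothesis that the shares are equal
    pooled = (pop_p * n_pop + samp_p * n_samp) / (n_pop + n_samp)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_pop + 1 / n_samp))
    return (samp_p - pop_p) / se

# Female freshmen: 51% of 1,644 eligible students vs. 56% of 269 tested students
z = representativeness_z(0.51, 1644, 0.56, 269)
print(round(z, 2))  # 1.52 -> |z| < 1.96, so not significant at the 95% level
```

A |z| below the 1.96 critical value suggests the group's representation in the tested sample is consistent with its share of the population; larger gaps would warrant the interpretive caution the paragraph above recommends.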

Undergraduate Student Demographic Breakdown

                                          Freshmen                 Seniors
                                   Eligible    Tested       Eligible    Tested
Gender
  Female                             51%        56%           54%        57%
  Male                               49%        44%           46%        43%
  Other or Unknown                   <1%        <1%           <1%        <1%
Race/Ethnicity
  US Underrepresented Minority       36%        35%           34%        34%
  White / Caucasian                  23%        62%           61%        62%
  International                       1%         3%            3%         4%
  Unknown                            12%        <1%            2%        <1%
Low-income (Eligible to receive
  a Federal Pell Grant)              <1%        <1%           <1%        <1%

Our sample of approximately 1,500 students per year is proportionally representative of the university population on all characteristics. This size is more than adequate: it supports a 95% confidence level with less than a 3% margin of error on inferential statistics.

The VSA advises institutions to follow assessment publisher guidelines for determining the appropriate number of students to test. In the absence of publisher guidelines, the VSA provides sample size guidelines for institutions based on a 95% confidence interval and 5% margin of error. So long as the tested sample demographics represent the student body, this means we can be 95% certain that the "true" population learning outcomes are within +/- 5% of the reported results. For more information on sampling, please refer to the Research Methods Knowledge Base.
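The arithmetic behind these guidelines can be sketched with the standard large-sample formulas for proportions (a generic illustration, not anything VSA-specific; the worst-case p = 0.5 is the usual conservative assumption):

```python
import math

Z95 = 1.959964  # two-sided z critical value for 95% confidence

def required_sample_size(margin, p=0.5):
    """Smallest n achieving the stated margin of error at 95% confidence,
    n = z^2 * p * (1 - p) / margin^2, using the worst case p = 0.5."""
    return math.ceil(Z95**2 * p * (1 - p) / margin**2)

def margin_of_error(n, p=0.5):
    """Worst-case margin of error at 95% confidence for a sample of n."""
    return Z95 * math.sqrt(p * (1 - p) / n)

print(required_sample_size(0.05))       # 385 -> order of the VSA's 5% guideline
print(round(margin_of_error(1500), 4))  # 0.0253 -> consistent with a sub-3% margin
```

Note that a sample of roughly 1,500 comfortably beats the 5% guideline, which is consistent with the sub-3% margin of error claimed above.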

The increase in learning on the performance task is above what would be expected at an institution testing students of similar academic abilities.

The increase in learning on the analytic writing task is at or near what would be expected at an institution testing students of similar academic abilities.

Writing Detail

The charts below show the distribution of student scores on the ACT CAAP Written Communication Test. The test is scored on a rubric ranging from 1 to 6 in increments of 0.5, with 6 representing the best score. Each student's response is scored by two raters; the rating distributions for each rater are shown below. The Overall Writing Score is the average of the two ratings.
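A minimal sketch of how the Overall Writing Score is derived from the two ratings (the function name and input validation are illustrative assumptions, not ACT's implementation):

```python
def overall_writing_score(rater1, rater2):
    """Overall CAAP writing score: the average of two rater scores.
    Each rater scores on a 1-6 rubric in 0.5-point steps, so the
    average falls on a 0.25-point grid."""
    for s in (rater1, rater2):
        # Validate that the score is 1-6 in 0.5-point increments
        assert 1 <= s <= 6 and s * 2 == int(s * 2), "scores are 1-6 in 0.5 steps"
    return (rater1 + rater2) / 2

print(overall_writing_score(4.0, 4.5))  # 4.25
```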

Critical Thinking Detail

The chart below shows the distribution of student scores on the ACT CAAP Critical Thinking Test. Students receive a scaled score between 1 and 80, with 80 representing the best score.