University of Arkansas College Portrait

University of Arkansas Learning Outcomes

The main tool through which the University of Arkansas measures student learning is regular participation in the Collegiate Learning Assessment.  The last time this assessment was administered on the UA campus was during the 2010-2011 academic year.  The introduction to the Collegiate Learning Assessment report explains its purpose and methodology:

“The Collegiate Learning Assessment (CLA) is a major initiative of the Council for Aid to Education.  The CLA offers a value-added, constructed-response approach to the assessment of higher-order skills, such as critical thinking and written communication. Hundreds of institutions and hundreds of thousands of students have participated in the CLA to date.

The institution—not the student—is the primary unit of analysis. The CLA is designed to measure an institution’s contribution, or value added, to the development of higher order skills. This approach allows an institution to compare its student learning results on the CLA with learning results at similarly selective institutions.

The CLA is intended to assist faculty, school administrators, and others interested in programmatic change to improve teaching and learning, particularly with respect to strengthening higher-order skills.

Included in the CLA are Performance Tasks and Analytic Writing Tasks. Performance Tasks present realistic problems that require students to analyze complex materials. Several different types of materials are used that vary in credibility, relevance to the task, and other characteristics. Students’ written responses to the tasks are graded to assess their abilities to think critically, reason analytically, solve problems, and write clearly and persuasively.

The CLA helps campuses follow a continuous improvement model that positions faculty as central actors in the link between assessment and teaching/learning.

The continuous improvement model requires multiple indicators beyond the CLA because no single test can serve as the benchmark for all student learning in higher education.  There are, however, certain skills judged to be important by most faculty and administrators across virtually all institutions; indeed, the higher-order skills the CLA focuses on fall into this category.

The signaling quality of the CLA is important because institutions need to have a frame of reference for where they stand and how much progress their students have made relative to the progress of students at other colleges. Yet, the CLA is not about ranking institutions.  Rather, it is about highlighting differences between them that can lead to improvements. The CLA is an instrument designed to contribute directly to the improvement of teaching and learning. In this respect it is in a league of its own.

The CLA uses constructed-response tasks and value-added methodology to evaluate your students’ performance reflecting the following higher-order skills: Analytic Reasoning and Evaluation, Writing Effectiveness, Writing Mechanics, and Problem Solving.

Schools test a sample of entering students (freshmen) in the fall and exiting students (seniors) in the spring. Students take one Performance Task or a combination of one Make-an-Argument prompt and one Critique-an-Argument prompt.

The interim results that your institution received after the fall testing window reflected the performance of your entering students.

Your institution’s interim institutional report presented information on each of the CLA task types, including means (averages), standard deviations (a measure of the spread of scores in the sample), and percentile ranks (the percentage of schools that had lower performance than yours). Also included was distributional information for each of the CLA subscores: Analytic Reasoning and Evaluation, Writing Effectiveness, Writing Mechanics, and Problem Solving.

This report is based on the performance of both your entering and exiting students.  Value-added modeling is often viewed as an equitable way of estimating an institution’s contribution to learning. Simply comparing average achievement of all schools tends to paint selective institutions in a favorable light and discount the educational efficacy of schools admitting students from weaker academic backgrounds. Value-added modeling addresses this issue by providing scores that can be interpreted as relative to institutions testing students of similar entering academic ability. This allows all schools, not just selective ones, to demonstrate their relative educational efficacy.

The CLA value-added estimation approach employs a statistical technique known as hierarchical linear modeling (HLM).  Under this value-added methodology, a school’s value-added score indicates the degree to which the observed senior mean CLA score meets, exceeds, or falls below expectations established by (1) seniors’ Entering Academic Ability (EAA) scores and (2) the mean CLA performance of freshmen at that school, which serves as a control for selection effects not covered by EAA. Only students with EAA scores are included in institutional analyses.

When the average performance of seniors at a school is substantially better than expected, this school is said to have high “value added.” To illustrate, consider several schools admitting students with similar average performance on general academic ability tests (e.g., the SAT or ACT) and on tests of higher-order skills (e.g., the CLA). If, after four years of college education, the seniors at one school perform better on the CLA than is typical for schools admitting similar students, one can infer that greater gains in critical thinking and writing skills occurred at the highest performing school. Note that a low  (negative) value-added score does not necessarily indicate that no gain occurred between freshman and senior year; however, it does suggest that the gain was lower than would typically be observed at schools testing students of similar entering academic ability.”
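As a rough illustration of the value-added logic described above (a simplified sketch, not CAE's actual hierarchical linear model), one can fit a school-level regression of mean senior CLA scores on mean Entering Academic Ability and mean freshman CLA scores, and read each school's residual as its value-added estimate. All numbers and variable names below are hypothetical.

```python
import numpy as np

# Hypothetical school-level data (one row per institution).
eaa_mean = np.array([1050, 1120, 980, 1200, 1010, 1150])    # mean EAA (SAT/ACT-equivalent)
fresh_cla = np.array([1080, 1130, 1010, 1210, 1040, 1160])  # mean freshman CLA score
senior_cla = np.array([1170, 1200, 1120, 1260, 1090, 1270]) # mean senior CLA score

# Design matrix: intercept plus the two predictors used to set expectations.
X = np.column_stack([np.ones_like(eaa_mean), eaa_mean, fresh_cla])

# Ordinary least squares across schools (a simplification of the HLM approach).
beta, *_ = np.linalg.lstsq(X, senior_cla, rcond=None)
expected = X @ beta

# A school's value-added estimate: how far its observed senior mean sits above
# or below expectation, expressed in standard-deviation units of the residuals
# so schools can be compared on a common scale.
residuals = senior_cla - expected
value_added = residuals / residuals.std(ddof=1)

for i, va in enumerate(value_added):
    print(f"School {i + 1}: expected {expected[i]:.0f}, observed {senior_cla[i]}, value-added {va:+.2f} SD")
```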




University of Arkansas conducted a value-added administration of the CLA+ in 2015. The results are displayed below in the SLO Results tab.

For additional information on UA’s process for administering CLA+, please click on the Assessment Process Tab below. For information on the students included in the administration, please click the Students Tested Tab.

Why did you choose the CLA+ for your institutional assessment?

After considering various alternatives, the University of Arkansas chose the Collegiate Learning Assessment (CLA+) because it measures learning outcomes directly: by testing both freshmen and seniors, it yields a value-added metric of student learning over the course of students' college careers.


Which University of Arkansas students are assessed? When?

In academic year 2014-2015, random samples of 500 entering freshmen and 500 graduating seniors were invited to participate in the Collegiate Learning Assessment (CLA+).  One hundred and ten freshmen and one hundred and eleven seniors participated.


How are assessment data collected?

Testing data was collected and analyzed by the CLA+ division of the Council for Aid to Education.


How are data reported within University of Arkansas?

Results from the CLA are reviewed and discussed by several members of the academic affairs staff who look for areas of strength and weakness in our scores within the major subsections:  scientific and quantitative reasoning, critical reading and evaluation, and critique of an argument.


How are assessment data at UA used to guide program improvements?

Reviewers focus on the areas in which our students' scores were substantially higher or lower than expected, note these discrepancies, and keep them in mind when reviewing programs and services for students.


Of 4518 freshman students eligible to be tested, 110 (2%) were included in the tested sample at University of Arkansas.


Of 2940 senior students eligible to be tested, 111 (4%) were included in the tested sample at University of Arkansas.


Probability sampling, where a small randomly selected sample of a larger population can be used to estimate the learning gains in the entire population with statistical confidence, provides the foundation for campus-level student learning outcomes assessment at many institutions. It's important, however, to review the demographics of the tested sample of students to ensure that the proportion of students within a given group in the tested sample is close to the proportion of students in that group in the total population. Differences in proportions don't mean the results aren't valid, but they do mean that institutions need to use caution in interpreting the results for the groups that are under-represented in the tested sample.

Undergraduate Student Demographic Breakdown

                                                    Freshmen                      Seniors
                                             Eligible    Tested            Eligible    Tested
Gender
  Female                                        55%        65%                53%        56%
  Male                                          45%        35%                47%        44%
  Other or Unknown                              <1%        <1%                <1%        <1%
Race/Ethnicity
  US Underrepresented Minority                  19%        19%                16%        20%
  White / Caucasian                             79%        81%                82%        77%
  International                                  1%        <1%                 1%         3%
  Unknown                                       <1%        <1%                <1%        <1%
Low-income (eligible for a Federal Pell Grant)  22%        32%                30%        32%

The tested freshman sample had a higher proportion of female students than the eligible freshman population.  We will consider that fact when interpreting the results.
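As a rough check of how much that imbalance could matter (an illustrative sketch, not part of the VSA or CLA+ methodology), the snippet below compares the tested sample's female share with the eligible population's share using a one-sample z-test for a proportion; the figures are taken from the table above.

```python
from math import sqrt, erf

# Figures from the demographic table above (freshman cohort).
pop_female_share = 0.55      # share of female students among eligible freshmen
sample_female_share = 0.65   # share of female students among tested freshmen
n_tested = 110               # tested freshman sample size

# One-sample z-test for a proportion: is the tested sample's female share
# consistent with the eligible population's share, given the sample size?
se = sqrt(pop_female_share * (1 - pop_female_share) / n_tested)
z = (sample_female_share - pop_female_share) / se

# Two-sided p-value from the standard normal CDF.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"z = {z:.2f}, p = {p_value:.3f}")
# A small p-value suggests the over-representation of female students is
# unlikely to be sampling noise alone, so the results should be read with
# that imbalance in mind, as noted above.
```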

The VSA advises institutions to follow assessment publisher guidelines for determining the appropriate number of students to test. In the absence of publisher guidelines, the VSA provides sample size guidelines for institutions based on a 95% confidence interval and a 5% margin of error. So long as the tested sample demographics represent the student body, this means we can be 95% certain that the "true" population learning outcomes are within +/- 5% of the reported results. For more information on sampling, please refer to the Research Methods Knowledge Base.
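As a rough sketch of how such a sample-size guideline is typically computed (the VSA's exact tables may differ), the snippet below applies the standard formula for estimating a proportion to within a 5% margin of error at 95% confidence, with a finite-population correction for cohorts the size of UA's freshman and senior classes.

```python
from math import ceil

def required_sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Sample size for estimating a proportion within +/- margin at the
    confidence level implied by z (1.96 ~ 95%), assuming simple random
    sampling; p = 0.5 gives the most conservative (largest) estimate."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population formula
    n = n0 / (1 + (n0 - 1) / population)        # finite-population correction
    return ceil(n)

# Cohort sizes reported above.
print(required_sample_size(4518))  # freshmen -> roughly 355
print(required_sample_size(2940))  # seniors  -> roughly 340
```

Under these illustrative assumptions the guideline works out to roughly 355 freshmen and 340 seniors, which offers a reference point when reading the tested-sample counts above.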

The increase in learning on the performance task is below what would be expected at an institution testing students of similar academic abilities.

The increase in learning on the selected-response questions is above what would be expected at an institution testing students of similar academic abilities.

Seniors Detail

The charts below show the proportion of tested seniors who scored at each level of the nine subscales that make up the CLA+. The subscale scores range from 1 to 6, with 6 being the highest score. Due to rounding, percentages may not total 100%.

[Charts: distribution of senior Performance Task subscale scores for Analysis & Problem Solving, Writing Effectiveness, and Writing Mechanics]

The table below shows students' mean scores on the three subscales that make up the Selected-Response Questions section of the CLA+. Students' subscores are determined by the number of correct responses in each subsection, with those raw counts adjusted for the difficulty of the question set the students received. Individual student scores are rounded to the nearest whole number.

Subscale                                                  Mean Student Score
Scientific & Quantitative Reasoning (Range: 200 to 800)        583.0
Critical Reading & Evaluation (Range: 200 to 800)              580.0
Critique an Argument (Range: 200 to 800)                       577.0

Freshmen Detail

The charts below show the proportion of tested freshmen who scored at each level of the nine subscales that make up the CLA+. The subscale scores range from 1 to 6, with 6 being the highest score. Due to rounding, percentages may not total 100%.

[Charts: distribution of freshman Performance Task subscale scores for Analysis & Problem Solving, Writing Effectiveness, and Writing Mechanics]

The table below shows students' mean scores on the three subscales that make up the Selected-Response Questions section of the CLA+. Students' subscores are determined by the number of correct responses in each subsection, with those raw counts adjusted for the difficulty of the question set the students received. Individual student scores are rounded to the nearest whole number.

Subscale                                                  Mean Student Score
Scientific & Quantitative Reasoning (Range: 200 to 800)        568.0
Critical Reading & Evaluation (Range: 200 to 800)              571.0
Critique an Argument (Range: 200 to 800)                       565.0