
University of North Carolina at Asheville College Portrait


University of North Carolina at Asheville Learning Outcomes

As the UNC system's designated undergraduate liberal arts campus, UNC Asheville places teaching and learning at the core of the student experience. We use a variety of mechanisms to assess student learning and growth throughout students' careers at UNC Asheville. In addition to administering standardized measures such as the Collegiate Learning Assessment and the National Survey of Student Engagement, UNC Asheville faculty collaborate to develop embedded assessments within each academic program and for the Liberal Arts Core curriculum. Many of these embedded assessments use rubrics such as the AAC&U VALUE Rubrics.




University of North Carolina at Asheville administered the AAC&U VALUE Rubrics in 2014.

University of North Carolina at Asheville conducted a Senior-only benchmarked administration of the AAC&U VALUE Rubrics in 2014. The results are displayed below in the SLO Results tab.

For additional information on UNCA’s process for administering AAC&U VALUE Rubrics, please click on the Assessment Process Tab below. For information on the students included in the administration, please click the Students Tested Tab.

Why did you choose the AAC&U VALUE Rubrics for your institutional assessment?

UNC Asheville faculty believe that, done well, embedded assessment helps us improve our curricula, teaching, and student learning more effectively than standardized testing. Embedded assessment should use time effectively, preserve or expand a department's flexibility beyond its own planning horizon, and preserve or expand its autonomy.

To this end, UNC Asheville provides faculty with a wealth of information on creating and editing rubrics through workshops and the IERP website at https://ierp.unca.edu/program-assessment-resources. A quick review of these resources shows that AAC&U's LEAP program and the VALUE Rubrics figure prominently. These rubrics were designed to evaluate a 21st-century definition of liberal education, were based on essential learning outcomes identified by faculty across the nation, and work hand in hand with authentic assessments, which attempt to determine whether students can apply their learning to complex problems and real-world challenges. These reasons all contributed to the adoption of the AAC&U VALUE Rubrics on our campus.

The Inquiry ARC program settled on the AAC&U Critical Thinking Rubric separately, after a period of experimentation with a variety of rubrics, including some designed on our campus. Since changing to the AAC&U VALUE Rubric, the data have been more useful to the program.


Which University of North Carolina at Asheville students are assessed? When?

We use VALUE Rubrics in multiple ways at UNCA.

For example, students are assessed at the beginning and end of each Inquiry ARC course. These courses are part of our Quality Enhancement Plan for SACSCOC and focus on Critical Thinking.

Additionally, several academic departments and programs use VALUE Rubrics in their assessment plans. For example, the Psychology department uses the Critical Thinking, Written Communication, and Oral Communication rubrics. In this case the assessment is administered once rather than as a pre/post test, typically in senior-level courses, although a few use junior-level courses. Another example is Management, which uses the Written Communication rubric as part of a larger rubric for a case study project.


How are assessment data collected?

Faculty in the Inquiry ARC program identify assignments at the beginning and end of each course that they will use to assess critical thinking via the AAC&U VALUE Critical Thinking Rubric. At the end of each course, faculty submit to the Assessment Team a spreadsheet of each student's score on each component of the rubric at pre- and post-test.

Each department that uses these rubrics similarly identifies an assignment and submits the data to its Assessment Liaison, who aggregates the data for departmental analysis.
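The layout of these submitted spreadsheets is not specified here. As a rough illustration only, the Python sketch below shows one way pre/post rubric scores of this kind could be aggregated by criterion; the column names and score values are hypothetical, not UNCA's actual file format or data.

    # Hypothetical sketch: aggregating pre/post rubric scores submitted by instructors.
    # Column names and values are assumptions for illustration, not UNCA's actual layout.
    import pandas as pd

    # Assumed layout: one row per student per rubric criterion, scored on the 0-4 VALUE scale.
    scores = pd.DataFrame({
        "student_id": [1, 1, 2, 2],
        "criterion":  ["Explanation of issues", "Evidence",
                       "Explanation of issues", "Evidence"],
        "pre_score":  [1.5, 2.0, 2.0, 1.0],
        "post_score": [2.5, 3.0, 3.0, 2.0],
    })

    # Mean pre score, post score, and growth for each rubric criterion.
    summary = (scores
               .assign(growth=lambda df: df["post_score"] - df["pre_score"])
               .groupby("criterion")[["pre_score", "post_score", "growth"]]
               .mean())
    print(summary)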


How are data reported within University of North Carolina at Asheville?

The Inquiry ARC Assessment Team has primary responsibility for reviewing assessment data from the program. These data are shared with the Provost, the Faculty Senate, the Institutional Effectiveness Committee, and the Board of Trustees.

Because the Inquiry ARC assessment plan involves a pre/post test, we have been able to compute growth scores for courses (e.g., how much students' critical thinking scores improve over the course of one semester). An analysis of the combined data set from Spring 2014 yielded evidence of significant improvement on each of the five criteria of the rubric. Preliminary analysis of 2014-15 data shows a continuation of that trend.
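The statistical method behind that analysis is not named here. As one common approach to testing pre-to-post improvement, the sketch below runs a paired t-test on each rubric criterion; the criterion labels follow the AAC&U Critical Thinking VALUE Rubric, but the scores are simulated placeholders, not UNCA data.

    # Illustrative sketch of a pre/post "growth" analysis per rubric criterion using a
    # paired t-test. The actual analysis method used at UNCA is not specified in this report.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    criteria = ["Explanation of issues", "Evidence", "Influence of context and assumptions",
                "Student's position", "Conclusions and related outcomes"]

    for criterion in criteria:
        # Simulated paired scores for the same students at the start and end of a course (0-4 scale).
        pre = rng.uniform(1.0, 2.5, size=40)
        post = pre + rng.normal(0.5, 0.4, size=40)        # simulated improvement
        t_stat, p_value = stats.ttest_rel(post, pre)      # paired (dependent-samples) t-test
        print(f"{criterion}: mean growth = {np.mean(post - pre):.2f}, p = {p_value:.4f}")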

Each department likewise aggregates the data for departmental analysis.

Where possible, we aggregate data at the institutional level for each AAC&U VALUE Rubric. These data are reviewed by the Institutional Effectiveness Committee, and summary reports are then shared with pertinent groups, including Senior Staff, the Faculty Senate, and the Liberal Arts Core Curriculum Committee.


How are assessment data at UNCA used to guide program improvements?

Departments use their data to determine whether changes are needed to the curricula, teaching, or other aspects of their programs.

The Inquiry ARC uses the data to determine whether its professional development offerings need to change, and each individual instructor uses the data to reflect on possible changes to his or her course.

The Institutional Effectiveness Committee reviews data at the institutional level to look for campus-wide trends that need to be brought to the attention of Senior Staff, the Faculty Senate, or other campus groups. Additionally, feedback is sent back to departments, and the IEC sets workshops and other professional development for the next year based on these analyses.


Overall, the sample is representative of the population. Our choice to use embedded assessment does result in some selection bias, as students choose the courses they take. This resulted in higher proportions of females and transfer students (hence no SAT scores) than in the population.

However, a review of SAT scores shows remarkable similarity between the sample and the population, supporting the use of the sample as representative of the population.

The VSA advises institutions to follow assessment publisher guidelines for determining the appropriate number of students to test. In the absence of publisher guidelines, the VSA provides sample size guidelines for institutions based on a 95% confidence interval and a 5% margin of error. So long as the demographics of the tested sample represent the student body, this means we can be 95% certain that the "true" population learning outcomes are within +/- 5% of the reported results. For more information on sampling, please refer to the Research Methods Knowledge Base.
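As a concrete illustration of that guideline, the sketch below applies the standard sample-size formula for a proportion (Cochran's formula with a finite-population correction) at a 95% confidence level and 5% margin of error. The population size used is a made-up example, not UNCA enrollment data, and the VSA's published tables may differ slightly.

    # Sketch of the sample-size calculation behind a 95% confidence level and 5% margin of error.
    # The population size below is illustrative, not UNCA's actual senior class size.
    import math

    def required_sample_size(population, confidence_z=1.96, margin=0.05, p=0.5):
        """Cochran's formula with a finite-population correction."""
        n0 = (confidence_z ** 2) * p * (1 - p) / (margin ** 2)   # infinite-population size (~385)
        return math.ceil(n0 / (1 + (n0 - 1) / population))       # adjust for a finite population

    print(required_sample_size(800))   # e.g., a class of 800 students -> roughly 260 to test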

The charts below show the distribution of senior student scores on the AAC&U VALUE Rubrics for Written Communication and Critical Thinking. Students are scored at one of four levels: Benchmark, Milestone 1, Milestone 2, or Capstone. The Benchmark level is the level at which most incoming freshmen who begin college immediately after high school would perform. The Capstone level is the level at which senior students about to graduate would perform. All students, regardless of class standing, are scored on the same rubric against the same criteria, so it is expected that the distribution for senior scores would be centered farther to the right (closer to the Capstone level).

Critical Thinking Detail

The charts below show the distribution of student scores on the subscales of the Written Communication and Critical Thinking rubrics. Each rubric consists of five dimensions that students are rated on individually.

Written Communication Detail
