Appalachian State University College Portrait

Appalachian State University Learning Outcomes

Student learning at Appalachian is represented by the four goals of the general education program: thinking critically and creatively, communicating effectively, making local to global connections, and understanding responsibilities of community membership.  These goals are meant to be addressed throughout the curriculum and are assessed both within the general education program and by the departments as part of assessing learning in their majors.  In both cases, student accomplishment of these goals is determined through analysis of student work produced in response to classroom assignments, and the results of these analyses inform changes in the delivery of the general education curriculum and in instruction in the majors.  Major programs also use the assessment of student artifacts to determine how well students are learning the content unique to their specific majors.

Appalachian also surveys students regarding their opinions about the extent of their learning.  In past years we have used the Graduating Senior Survey (GSS) and the National Survey of Student Engagement (NSSE) for this purpose.  In 2015 (n=1999), 94% responded on the GSS that their experience at Appalachian had contributed “somewhat” or “very much” to their writing skill.  Likewise, 95% responded that Appalachian had contributed “somewhat” or “very much” to enhancing their analytical thinking skills.  The 2015 NSSE results (n=386) showed that 77% of seniors thought that Appalachian had contributed “Very much” or “Quite a bit” to their ability to “write clearly and effectively.”  Also, 87% responded that Appalachian had contributed “Very much” or “Quite a bit” to their ability to “think clearly and analytically.”  The General Education program also used student surveys to determine the extent to which it was achieving the desired results.  The survey questions were adapted from the VALUE rubrics developed by the Association of American Colleges and Universities (AAC&U).  The results are informing the University’s continuing efforts to improve the program.

Results of select questions from the Graduating Senior Survey (GSS) and NSSE were disaggregated by department. Departments used these results to inform their efforts to improve teaching effectiveness.  The results for all the questions on the GSS and the NSSE are posted on the university’s assessment website:




Appalachian State University conducted a Value-added administration of the CLA+ in 2013 - 2014. The results are displayed below in the SLO Results tab.

For additional information on Appalachian’s process for administering the CLA+, please click the Assessment Process tab below. For information on the students included in the administration, please click the Students Tested tab.

Why did you choose the CLA+ for your institutional assessment?

Appalachian State University participated in the VSA Learning Outcomes Pilot by administering the Collegiate Learning Assessment during the 2007-08, 2010-11, and 2013-14 academic years.  Membership in the VSA and use of the CLA was mandated by University of North Carolina General Administration for all schools in the system.

At Appalachian, test results are shared with campus academic leaders and are posted in our VSA College Portrait.  The results of the 2007-08, 2010-11, and 2013-14 administrations of the CLA indicated that our students’ academic gains from freshman to senior year were “as expected” for students like ours.  These findings, combined with the research methodology, limit the test’s value for providing actionable information.  Regarding the methodology: in both the 2007-08 and 2010-11 administrations, around 100 freshmen and 100 seniors were tested, and in 2013-14, as part of a UNC system pilot project, 200 freshmen and 200 seniors were tested.  Those numbers are too small to disaggregate to the program level, where curricula and degree requirements exist.  Disaggregation of student learning outcomes at least to the academic department level is necessary for campus officials to know how students’ academic experiences affect the gains in learning measured by the CLA.  Further, although a random sample of students was approached to take the CLA, only willing, paid volunteers actually took the tests.  For these reasons, the value of the CLA results for informing campus decisions about student learning outcomes is limited.


Which Appalachian State University students are assessed? When?

Following the testing model prescribed by the CLA, in 2013-14 we tested 200 freshmen during their first semester on campus (Fall 2013) and 200 seniors during their last semester on campus (Spring 2014).


How are assessment data collected?

Students who agreed to take the CLA were asked to schedule themselves into testing sessions held in a computer lab on campus.  They took the test online in one sitting of approximately an hour.


How are data reported within Appalachian State University?

The results of the CLA were reported to the college deans, who in turn shared them with faculty within their colleges.  Using the data provided by the CLA, we were also able to compare the results of native students with those of transfer students.  The small sample size made other meaningful disaggregation impossible.


How are assessment data at Appalachian used to guide program improvements?

Because Appalachian’s General Education program was in a state of flux, there was little we were able to take away from our experience with the CLA so far.  If we continue to use it, we may get some indication of the success of our recent General Education revisions.


Of 2,883 freshman students eligible to be tested, 207 (7%) were included in the tested sample at Appalachian State University.


Of 2,729 senior students eligible to be tested, 217 (8%) were included in the tested sample at Appalachian State University.


Probability sampling, where a small randomly selected sample of a larger population can be used to estimate the learning gains in the entire population with statistical confidence, provides the foundation for campus-level student learning outcomes assessment at many institutions. It's important, however, to review the demographics of the tested sample of students to ensure that the proportion of students within a given group in the tested sample is close to the proportion of students in that group in the total population. Differences in proportions don't mean the results aren't valid, but they do mean that institutions need to use caution in interpreting the results for the groups that are under-represented in the tested sample.

Undergraduate Student Demographic Breakdown

                                          Freshmen                Seniors
                                          Eligible    Tested      Eligible    Tested
Gender
    Female                                55%         67%         56%         65%
    Male                                  45%         33%         44%         35%
    Other or Unknown                      <1%         <1%         <1%         <1%
Race/Ethnicity
    US Underrepresented Minority          13%         17%         10%         11%
    White / Caucasian                     85%         77%         88%         78%
    International                         1%          <1%         <1%         <1%
    Unknown                               1%          5%          2%          11%
Low-income (eligible to receive
a Federal Pell Grant)                     24%         37%         27%         25%

Our senior sample was proportionately similar in gender composition to the class at large.  It was also similar in its proportion of minority and non-minority students.

Our freshman sample had a proportionately higher share of female students than the class at large.  We will take that into account when interpreting the results and implementing any action plans.  The sample was proportionately similar to the class at large in its proportion of minority and non-minority students.
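One simple way to gauge whether a demographic gap like this exceeds ordinary sampling noise is a one-sample proportion z-test, comparing the tested sample's share of a group against the eligible population's share. A minimal sketch, using the freshman gender figures from the table above (the function name is our own, not part of any assessment tool):

```python
import math

def proportion_z(p_sample: float, p_population: float, n: int) -> float:
    """z-statistic comparing a sample proportion to a known population proportion."""
    # Standard error of the sample proportion under the null hypothesis
    se = math.sqrt(p_population * (1 - p_population) / n)
    return (p_sample - p_population) / se

# Freshmen: 67% female among the 207 tested vs. 55% female among eligible students.
z_female = proportion_z(0.67, 0.55, 207)
print(f"z = {z_female:.2f}")  # |z| > 1.96 suggests the gap is unlikely to be chance
```

Values of |z| above roughly 1.96 indicate a gap unlikely to arise from random sampling alone at the 5% level, consistent with the observation that female students were over-represented among tested freshmen.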

The VSA advises institutions to follow assessment publisher guidelines for determining the appropriate number of students to test. In the absence of publisher guidelines, the VSA provides sample size guidelines for institutions based on a 95% confidence interval and a 5% margin of error. So long as the tested sample demographics represent the student body, this means we can be 95% certain that the "true" population learning outcomes are within +/- 5% of the reported results. For more information on sampling, please refer to the Research Methods Knowledge Base.
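The 95% confidence / 5% margin-of-error guideline translates into a standard sample-size formula for a proportion, with a finite-population correction when the class size is known. A hedged sketch, assuming simple random sampling and the worst-case proportion p = 0.5 (the function names are our own):

```python
import math

Z_95 = 1.96  # z-score for a 95% confidence level

def required_sample_size(population: int, margin: float = 0.05, p: float = 0.5) -> int:
    """Sample size needed for a proportion at 95% confidence, 
    with a finite-population correction."""
    n0 = (Z_95 ** 2) * p * (1 - p) / margin ** 2        # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / population))  # shrink for a finite class

def margin_of_error(n: int, population: int, p: float = 0.5) -> float:
    """Margin of error actually achieved by a sample of size n."""
    fpc = math.sqrt((population - n) / (population - 1))  # finite-population correction
    return Z_95 * math.sqrt(p * (1 - p) / n) * fpc

# Freshman class of 2,883: the guideline would call for about 340 tested students;
# the 207 actually tested give a margin of error closer to 6.5%.
print(required_sample_size(2883), f"{margin_of_error(207, 2883):.1%}")
```

Under these assumptions, the 207 freshmen and 217 seniors tested fall somewhat short of the roughly 340 students the 5% guideline implies, which is consistent with the campus's caution about disaggregating the results.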

The increase in learning on the performance task is at or near what would be expected at an institution testing students of similar academic abilities.

The increase in learning on the selected-response questions is at or near what would be expected at an institution testing students of similar academic abilities.

Seniors Detail

The charts below show the proportion of tested seniors who scored at each level of the three subscales that make up the CLA+ Performance Task. Subscale scores range from 1 to 6, with 6 representing the highest score. Due to rounding, percentages may not total 100%.

Performance Task
[Charts: score distributions for Analysis & Problem Solving, Writing Effectiveness, and Writing Mechanics]

The table below shows students' mean scores on the three subscales that make up the Selected-Response Questions section of the CLA+. Students' subscores are determined by the number of correct responses in each subsection, with those raw numbers adjusted based on the difficulty of the question set the students received. Individual student scores are rounded to the nearest whole number.

Subscale                                                    Mean Student Score
Scientific & Quantitative Reasoning (Range: 200 to 800)     593.0
Critical Reading & Evaluation (Range: 200 to 800)           578.0
Critique an Argument (Range: 200 to 800)                    584.0
Freshmen Detail

The charts below show the proportion of tested freshmen who scored at each level of the three subscales that make up the CLA+ Performance Task. Subscale scores range from 1 to 6, with 6 representing the highest score. Due to rounding, percentages may not total 100%.

Performance Task
[Charts: score distributions for Analysis & Problem Solving, Writing Effectiveness, and Writing Mechanics]

The table below shows students' mean scores on the three subscales that make up the Selected-Response Questions section of the CLA+. Students' subscores are determined by the number of correct responses in each subsection, with those raw numbers adjusted based on the difficulty of the question set the students received. Individual student scores are rounded to the nearest whole number.

Subscale                                                    Mean Student Score
Scientific & Quantitative Reasoning (Range: 200 to 800)     571.0
Critical Reading & Evaluation (Range: 200 to 800)           553.0
Critique an Argument (Range: 200 to 800)                    552.0