Rubrics and Scaffolding Across Courses to Improve Learning

Computer Information Systems Undergraduate Program

A true differentiator between Computer Information Systems (CIS) and other technical majors is that CIS emphasizes the human aspect of technology. A major program-wide objective states that CIS graduates can “[a]pply sound analysis and design methodologies toward creating technological solutions for the enhancement and improvement of business processes.” Our dilemma began when a CIS colleague decided to use a student project to create a small piece of software. Because our curriculum emphasizes analyzing client needs and developing a technology solution, he volunteered to act as the client, with student teams interviewing him to elicit the new software’s requirements.


After a week of interviews, he realized that the student teams were not effective at interviewing for requirements elicitation (RE)! He asked colleagues, “Where do we teach interviewing skills in our curriculum?” A course titled Systems Analysis and Design, which uses a textbook with a chapter on RE, appeared to be students’ main point of contact with this topic. That chapter offers six pages of cursory information on interviewing, with no actual instruction on how to interview. We realized that nowhere in our curriculum did we actively teach the skill of interviewing, nor did we assess our students’ ability to perform an effective RE interview, apart from two embedded exam questions in their final semester. The student teams’ poor performance in the mock interviews suggested that, as a faculty, we were not fully meeting this important RE-related learning objective!


Our decision to improve student learning in RE interviews coincided with the university-wide launch of a joint program offered through the Office of Assessment and the university’s Center for Faculty Innovation (CFI). This joint Learning Improvement by Design (LID) program provided the expertise of a CFI learning improvement specialist and an assessment specialist to help us develop curriculum changes, improve learning, and measure performance outcomes.


During the baseline year of the LID project, we taught RE interviewing using our traditional textbook chapter and class activities, and all student teams performed an RE interview as part of a semester-long case study project. Although the students performed poorly, we video-recorded these pre-intervention interviews at the end of the semester to establish a baseline.


Also during the baseline year, two CIS faculty, two assessment specialists, and a learning improvement specialist created an evaluation rubric for the RE interview. The rubric covers eight RE interview factors and was designed both to support teaching and learning and to assess student learning outcomes. Each factor is rated at one of five levels of ability, with points assigned to each level: beginning (1), developing (2), competent (3), excellent (4), and outstanding experienced professional (5). Using the rubric, a team of faculty assessed the students’ performance on the pre-intervention videos. As expected, the student teams performed poorly in the baseline year, with mean scores ranging from 1.4 to 2.8.
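
To make the scoring arithmetic concrete, here is a minimal sketch (in Python) of how a team’s rubric score can be computed under the scheme described above: each of the eight factors receives one of the five ratings, ratings are converted to points, and the team’s score is the mean across factors. The factor names below are hypothetical placeholders; the actual rubric factors are not listed here.

```python
# Minimal sketch of rubric scoring, assuming a team's score is the
# mean of its eight per-factor ratings. Factor names are hypothetical
# placeholders; the actual rubric factors are not listed in this article.

RATING_POINTS = {
    "beginning": 1,
    "developing": 2,
    "competent": 3,
    "excellent": 4,
    "outstanding experienced professional": 5,
}

def team_score(ratings):
    """Convert each factor's rating to points and return the mean."""
    points = [RATING_POINTS[rating] for rating in ratings.values()]
    return sum(points) / len(points)

# Example: a hypothetical baseline-year team rated on eight factors.
ratings = {
    "opening the interview": "developing",
    "question quality": "beginning",
    "active listening": "developing",
    "follow-up probing": "beginning",
    "managing interview flow": "developing",
    "note taking": "competent",
    "closing the interview": "developing",
    "professionalism": "competent",
}

print(round(team_score(ratings), 2))  # 2.0, within the baseline 1.4-2.8 range
```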


The following summer, we initiated a LID-related curriculum change. We engaged eight faculty in developing learning-improvement activities scaffolded across eight courses in the CIS curriculum, with the activities stepped according to Bloom’s Taxonomy. For example, the first-year course required students to remember and understand topics and terminology. Third-year courses implemented learning-improvement activities that required students to analyze and evaluate video-recorded RE interviews. Finally, fourth-year courses required the student teams to perform an RE interview as part of the semester-long case study. The student teams were assessed with the rubric at the end of the second year, and the mean scores ranged from 2.6 to 3.5, a vast improvement.


This was our “WOW” moment! We worked program-wide and faculty-wide to increase learning, to meet our established learning objective, and to prepare our students to gather requirements effectively through RE interviews. At the end of the third year, the student teams’ rubric scores held relatively steady, ranging from 2.65 to 3.4. Not yet satisfied with our learning improvement, we recently made small revisions to the rubric and created additional learning-improvement activities. Our greatest outcome from the LID project is the realization that we, as a CIS faculty, now have a culture of learning improvement to carry us forward.


Additional Context

The Computer Information Systems program at James Madison University (JMU) has over 500 majors. JMU is a public institution classified as a master’s college/university with larger programs, with 21,000 undergraduates and about 2,000 graduate students.


Additional Resources

Fulcher, K. H., Smith, K. L., Sanchez, E. R. H., Ames, A. J., & Meixner, C. (2017). Return of the pig: Standards for learning improvement. Research & Practice in Assessment, 11(2), 10-27.


Lending, D., Fulcher, K. H., Ezell, J. D., May, J. L., & Dillon, T. W. (2018). Example of a program-level learning improvement report. Research & Practice in Assessment, 13, 34-50.


Citation

Rubrics and scaffolding across courses to improve learning: Computer information systems undergraduate program. (2019, March). Retrieved from https://www.learning-improvement.org/
