Institutional Assessment | Feature
Putting the Focus on Performance
In the push to improve student performance, two universities institute comprehensive, data-driven systems to assess their departments.
- By Dian Schaffhauser
Illustration by James Steinberg
From the excesses of Wall Street to the whole mortgage mess, the issue of accountability has become part of a heated national conversation. It is a debate to which higher education is certainly no stranger. But now, with tuition rates at private and public schools rising precipitously -- and state budgets shriveling -- the cries for increased accountability are growing louder. To add fuel to the fire, several recent media reports have questioned the very value of a college education in light of the bleak job prospects for new graduates.
So how do schools prove their worth -- and constantly improve? How do they answer the questions: What are we doing, why are we doing it, and how are we doing?
One thing is clear: The task of assessing a school's performance can no longer be carried out just within the walls of the office of institutional research. The foundation for student success is laid in departments -- academic and even non-academic -- scattered throughout campus. And in the drive to improve student performance, many believe, each of these areas must be held accountable.
That's a lesson that Barbara Buckner learned in 2007 when she accepted the position of associate provost for assessment and accreditation at Coastal Carolina University (SC), a four-year public institution with 8,700 students, mostly undergraduate. Buckner was hired specifically to help the university prepare for its regional accreditation with the Southern Association of Colleges and Schools. The idea was for Buckner and a newly formed committee to develop a plan to assess the academic colleges.
Coastal Carolina University
Setup: One main campus, 8,700 students, mainly undergraduate
Assessment tool: TEAL Online, developed in-house using open source software, including MySQL, PHP 5.3.2, and XSLT
User base: Every department, academic and non-academic
Take-away: The university's assessment committee doesn't tell departments what goals to measure, but the committee provides training to help people understand what a good assessment looks like. "They know what their goals are," says Barbara Buckner, associate provost for assessment and accreditation.
When she delivered a draft of the plan to the university's new provost, he told her that she needed to include student affairs, too. She reworked the plan, only to have the provost return it again, this time with a request to include all other areas on campus -- international programs, the library, financial aid, the registrar's office, even facilities -- totaling about 50 departments or "units," as they're called on campus. Each unit was then expected to develop a set of goals related to serving students, and to measure changes related to those goals.
In "Scanning the Dashboard," learn how dashboard software can help manage a multitude of institutional goals and objectives.
Buckner's experience is typical of the changing role played by the office of institutional research on campuses nationwide. No longer strictly focused on duties related to accreditation or government reporting demands, the research team increasingly finds itself playing the role of cheerleader, facilitator, and manager of the accountability process campuswide. Its charge: to help the campus community draw the arc between strategic initiatives set in the president's office and activities taking place on the ground. Accountability efforts provide the scorecard.
What Assessment Means
For Buckner, her first task was to establish a framework for assessment. Many of the units at Coastal Carolina were already handing in annual reports, but, Buckner declares, "They were bragging reports: 'These are all the great things we've done this year.' They'd list publications and accomplishments, but they'd never assess anything. They'd never answer, 'Why is it important that that is what we've accomplished?'"
For the units, the first step in the new program was to set broad goals. "Then we asked the units to narrowly define how the goals would be met," explains Buckner, "either through student learning outcomes or objectives." For example, one of the financial aid unit's broad goals is to improve access to education. An objective that feeds into that goal is a comprehensive review of institutional merit-based scholarships to measure the effectiveness of the awarding policy.
The assessment committee quickly discovered that it would need an automated system to collect the data and reports generated by each unit. The university had been using Assessment Plan Composer, an online application created by the University of South Carolina, to gather information specific to student learning. The system was unable to handle the assessment data coming from non-academic areas, though, so the university decided to develop its own electronic repository in-house.
Technology in Education to Advance Learning (TEAL) Online was introduced in fall 2008. Built using open source software, including MySQL, PHP 5.3.2, and XSLT, it includes a template that guides each academic and administrative unit through its mission statement, goals, objectives, and data.
Each objective is classified according to one of 15 categories, such as community outreach, customer service, professional development, and research or scholarship. In each case, the category ties back to the student. For example, community outreach might entail service-learning projects or environmental stewardship. As part of the template, the unit must also describe what metrics it uses -- whether a standard norm, a survey given to students, or some other mechanism for measuring results.
During the academic year, each unit collects the data, which is published to TEAL Online. The unit analyzes the results and writes a summary report, typically about a page long. A director or dean is responsible for reading and approving the report, since it serves as the basis for next year's plan and also impacts how university funds will be allocated.
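As described, each unit's plan links the unit to broad goals, each goal to narrowly defined objectives, and each objective to a category and a metric. A minimal sketch of that relational shape, using Python with SQLite purely for illustration (the actual system is built on MySQL and PHP, and every table and column name here is hypothetical):

```python
import sqlite3

# Illustrative sketch of a TEAL Online-style repository.
# The real system uses MySQL/PHP; this schema is invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE unit (
    id      INTEGER PRIMARY KEY,
    name    TEXT NOT NULL        -- e.g. 'Financial Aid'
);
CREATE TABLE goal (
    id      INTEGER PRIMARY KEY,
    unit_id INTEGER REFERENCES unit(id),
    text    TEXT NOT NULL        -- broad goal
);
CREATE TABLE objective (
    id       INTEGER PRIMARY KEY,
    goal_id  INTEGER REFERENCES goal(id),
    category TEXT NOT NULL,      -- one of the 15 categories
    text     TEXT NOT NULL,      -- narrowly defined objective
    metric   TEXT NOT NULL       -- how results are measured
);
""")

# The financial aid example from the article, mapped onto this shape.
conn.execute("INSERT INTO unit VALUES (1, 'Financial Aid')")
conn.execute("INSERT INTO goal VALUES (1, 1, 'Improve access to education')")
conn.execute(
    "INSERT INTO objective VALUES (1, 1, 'customer service', "
    "'Review institutional merit-based scholarships', "
    "'Effectiveness of the awarding policy')"
)

row = conn.execute("""
    SELECT u.name, g.text, o.text
    FROM objective o JOIN goal g ON o.goal_id = g.id
                     JOIN unit u ON g.unit_id = u.id
""").fetchone()
print(row)
```

The point of the single repository is exactly this kind of join: any authorized user can trace an objective back through its goal to the unit that owns it.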
TEAL Online doesn't reflect everything happening on campus at Coastal Carolina, Buckner notes, but it provides an assessment schedule that keeps each unit focused on continuous improvement. It also keeps the university's strategic initiatives front and center for faculty and staff, since the goals and objectives must tie into them.
One practical benefit of the repository is convenience: All data and reports reside in one location and are accessible online by authorized users. This helps Coastal Carolina create reports for outside agencies, such as accreditation organizations, and also provides a measure of accountability. "Everyone on campus benefits from the improvements and deep analysis of activities taking place in all units," says Buckner.
Benefits of Developing In-House
Initially, Buckner thought development of the system would take about a summer. She had another thought in mind, too: to create an application that could be sold to other institutions, similar to WEAVEonline, a commercially available program for assessment and planning. Three years and many iterations of software later, she recognizes the naiveté of those initial beliefs. "Now that we've designed it, unless somebody is going to copy exactly what we're doing, this couldn't be sold," she admits.
While creating TEAL Online has been no picnic, Buckner is adamant that the university could not have found a prepackaged alternative. "You should design your data and reporting system to meet the needs of your institution," she explains. "Our needs are different. We couldn't go to a vendor and ask to do this."
Setting up the system was just the start, however. Next came user training, divvied up over seven workshops, only one of which focused on the assessment system itself. Others taught people how to write a mission statement, a student learning outcome, or, in the case of non-academic units, an objective; how to read the results; and how to create rubrics.
"The job is huge," says Buckner, "because it affects every corner of this campus, and you have to make sure people understand what assessment is." Instituting the program has been a major institutional change: Some units have adapted well to the new system; others haven't.
According to Buckner, the political science program has probably been the most successful at making the transition. The program has been using the ETS Major Field Tests for years, "but never looking at the data," she says. Once the program's faculty actually analyzed the data, they realized that they had a number of students who weren't doing so well. The department heads decided that they needed to make certain courses prerequisites, and instituted some major changes in the curriculum. As a result, students are doing better on their ETS tests, which is one of the metrics that the program now uses for evaluating how well its curriculum meets student needs.
Among the non-academic units, the student affairs department is leading the way, because the person in charge of the department's TEAL Online work really understands the purpose of assessment. "She pushed that group a little bit this fall to triangulate their data and came up with some good questions to ask around retention," says Buckner. "Instead of just saying, 'We have a retention problem,' she asked, 'Why do we have a retention problem?' and used the data. That's the key."
By focusing on a limited number of metrics and monitoring the impact of program changes, Buckner believes Coastal Carolina is getting better at asking why: "Why are the results the way they are? Why did this unit decide to take a certain action?" The expectation is that the analysis will help move the institution forward, action by action.
A Faculty Demand for Change
At Coastal Carolina, the drive for accountability came from the top down. At National University, a nonprofit institution with 28 campuses mainly in California, the impetus for change came from the faculty itself.
National University
Setup: 28 campuses, 28,000 students, mostly graduate
Assessment tool: Accountability Management System from TaskStream
User base: Every academic program
Take-away: Jack Paduntin, vice president of institutional research and assessment, believes that the structure of an institution is a major factor in how assessments can be applied. "When a faculty member makes a recommendation on how to improve a particular outcome, the private not-for-profits have the budget flexibility to support that kind of priority," he says. "I talk to a lot of people in public colleges. Although they may have that kind of information, it doesn't guarantee that it's part of the budget allocation or resource prioritization. For us, it's a drive."
National U has about 28,000 mostly graduate students, enrolled in any of 100 programs. For years, assessment technology consisted of Microsoft Word. Each year, the lead faculty members in each academic program would write up a Word document that outlined the learning outcomes planned for the coming year, including information about the type of test that would be used to measure effectiveness and what the target measures would be. The document would be printed out and put into a folder. Every five years, a program review would take place. If the faculty member who wrote the annual reports had left the university, the folder would often be lost and the five-year review would be meaningless.
About four years ago, the faculty began pushing for an alternative. Jack Paduntin, vice president of institutional research and assessment, worked with a group of faculty representatives to select a third-party program to run a pilot in the school of business. The pilot failed miserably -- not because the software was bad, insists Paduntin, but because "the capability of my office wasn't up to the level that could support the institution."
According to Paduntin, the training for the new system consisted of telling the faculty, "This is good to use. Here's your login and passcode. Go ahead and do it." Drily, he adds, "It doesn't work that way." A large group training effort turned into a giant complaint session. "When you have 50 people in a room, everybody has their own problems," he notes. "The problems would never end."
Armed with lessons from that failure, his office revisited the issue of assessment the following year and took a different approach. While there was nothing wrong with the application that they had tried the previous year, it now had a tainted reputation. Rather than battle to save the program, Paduntin brought in an alternative: Accountability Management System from TaskStream. Given how the first pilot project had crashed and burned, Paduntin's decision to deploy the new system across all 100 programs in the university, with a three-phase approach to training, was gutsy. "We were confident that our new approach to training would have a much better impact on the overall program's success," Paduntin says. And he was right.
The first phase was a repeat of the big group meeting, but this time it consisted of an explanation about how the software had been chosen and what it offered. At the meeting, Paduntin also announced that his staff would visit faculty in small group sessions. And rather than have each faculty member set up the software for his department, Paduntin's team took care of it instead.
"We built each program a home," he explains. "We handled anything that was labor intensive, anything that involved typing, any information we had access to from the catalog or other sources. Then we introduced faculty to the system one-on-one. We walked them through it and told them, 'This is your house. You can move the furniture anywhere you want. This is the frame we built for you.' We didn't just give them login information."
As a result, the faculty felt more comfortable with the software and could focus on what mattered to them: learning outcomes and program assessment. "They didn't have to worry about secretarial work or other things not critical to them," says Paduntin.
In the third phase of training, any faculty members who were still having problems came together for additional help. During that first year, Paduntin's office did a total of 130 training sessions.
A Facebook for Assessment
TaskStream provides a repository for assessment information by using a workspace design. Each program has its own workspace, in which the program's faculty members collaborate. "It's a Facebook for assessment. I can have 10 people access my space and I can create a topic in such a way that it's organized for my team to understand assessment," Paduntin says. "Faculty get a little more excited doing that, because they don't need to have an office meeting. They can talk about assessment at any time. That's the beauty of it."
Most National U programs have between 10 and 12 learning outcomes. In the Education Specialist Credential program, for example, one learning outcome is "Understand current laws." For each planning cycle, the program decides how many outcomes will be assessed and how that assessment will take place. Faculty members then gather that information from their classes and feed it to the lead faculty member through TaskStream. The lead member compiles and summarizes the data, which in turn helps the faculty improve the curriculum and make a case for their budgets.
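The compilation step described above -- individual faculty submitting per-class results that the lead faculty member rolls up by outcome -- can be sketched as follows. This is a hypothetical illustration, not TaskStream's actual data model; the class sections and figures are invented:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-class submissions: (learning outcome, class section,
# fraction of students meeting the target). All values are invented.
submissions = [
    ("Understand current laws", "EDS 601-01", 0.82),
    ("Understand current laws", "EDS 601-02", 0.74),
    ("Apply assessment methods", "EDS 610-01", 0.91),
]

def compile_outcomes(rows):
    """Roll up per-class results into one summary figure per outcome,
    the way a lead faculty member would before writing the summary."""
    by_outcome = defaultdict(list)
    for outcome, _section, rate in rows:
        by_outcome[outcome].append(rate)
    return {outcome: round(mean(rates), 2) for outcome, rates in by_outcome.items()}

summary = compile_outcomes(submissions)
print(summary)  # {'Understand current laws': 0.78, 'Apply assessment methods': 0.91}
```

A summary in this form is what then feeds the annual faculty-council review and, if the assessment passes muster, the budgeting process.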
During an annual review, a council of faculty members evaluates each program. A major part of that review is examining the validity of the assessment. Only after the assessment passes muster does it go through the budgeting process.
Accurate assessment can play a key role in effective budgeting, says Paduntin. If a faculty member says his students don't learn because a lab is too old, for example, the assessment results can support a request for upgrading the lab. In the four academic cycles during which TaskStream has been in use, National U has allocated about $800,000 specifically for improvements based on assessment results.
The university administration can also monitor the materials in the repository to learn who's on track with the planning work and who might need an e-mail reminder.
Each year, the university holds an assessment summit, at which faculty share their experiences. Paduntin feared that faculty would object to using the data maintained by TaskStream because it would expose problems in their programs. The opposite has happened. "Faculty have always wanted to do a good job," he says. "They do program improvements all the time, but they might not be good about showing it." Now, notes Paduntin, they take pride in showing how they're going to help students learn.
Paduntin has no doubt that these across-the-board assessment efforts are helping National U improve student learning. At the same time, they also make his job of meeting accreditation requirements much easier. National U is accredited by the Western Association of Schools and Colleges, but it also has another 18 accreditations for particular programs and schools. "Those accrediting agencies change their requirements over time," says Paduntin. "We need to be very current with them, and the technology we use to help us with that needs to be as supportive as possible."
And Paduntin feels that those assessment requirements will only grow more stringent over time. Referring to the federal government's recent moves to more closely regulate the operations of for-profit, publicly traded companies in the education field, Paduntin anticipates a day when private, not-for-profit schools -- such as his -- could come under similar scrutiny.
"We have very good assessment results, but we might be asked by accrediting agencies to publish all those learning outcome results," he cautions. "It's not a requirement now, but it might be. We always need to have that foundation to be ready for that type of compliance requirement. If the software we select can actually answer those different requirements, we want to use it."