When is a satisfaction assessment complete?
That’s actually a trick question. I would suggest that assessing student satisfaction is an ongoing process that includes not only administering a survey instrument to your students, but also the time involved in reviewing and sharing the results. It also includes exploring what the data really tell you about the student experience, then taking action to make improvements and communicating back to students what you have done and why.
By then, of course, it is time to survey your students again to see if you have been able to improve student satisfaction year to year (either annually or every other year, depending on the cycle that works best for you). This is a topic that I have covered in past blogs. (See “How can you turn college student satisfaction data into action planning,” “Taking college satisfaction data beyond the institutional research office,” and “Six critical steps to conducting regular assessment of college student satisfaction.”)
While I was thinking about this question, the recent Winter Olympics also got me thinking about gold, silver, and bronze levels of performance. Even though bronze and silver medals signify high achievement, there is still incentive to go for the gold to be truly on top of the game.
The same goes for campuses and satisfaction assessment. Based on my work with thousands of colleges, universities, community colleges, and career schools over the past two decades, I have identified the following activities, which illustrate the levels of engagement a campus undertakes once it receives its satisfaction survey results. Again, just conducting a satisfaction assessment is a step in the right direction, but to be at the very top, you may want to consider these types of activities for the future.
Early findings from Noel-Levitz’s forthcoming 2014 National Freshman Attitudes Report and its Race/Ethnicity Addendum (to be released in spring 2014) indicate strong interest in receiving career counseling among today’s entering undergraduates, led by students of color.
An area of increasing importance to student retention and college completion, career counseling—and the effectiveness of academic advising related to career discernment—can make a substantial impact on incoming students’ desire and motivation to continue their education.
The private for-profit college sector has been responding to the proposed “gainful employment” rules for the last few years. The proposed rules would require for-profit institutions and community colleges to meet debt-to-earnings standards for their career-related programs. For-profit institutions have had to address their business operations and graduate outcomes in response, and the proposed rules have had an overwhelming impact on strategic planning, operations, fiscal issues, school reputation, and marketing.
How have for-profit institutions successfully met these requirements? By assessing their students and graduates, guiding their planning with that data, and increasing their accountability to students and accreditation bodies. I call these institutions the “haves” of student enrollment management and satisfaction assessment. Their example can guide other for-profit institutions in responding to the changes set forth by the gainful employment rules.
In order to make wise organizational decisions, for-profit institutions need to leverage reliable, valid, and consistent data on enrollment practices and student and graduate satisfaction. It is also important for institutions to dissect the information they have in order to analyze student satisfaction levels from new student enrollment through graduation and beyond, as well as by educational program. It is this “slicing and dicing” of data that can uncover key areas of concern and suggest solutions for meeting challenges to student satisfaction. This is why it is crucial for student assessment to delve into every relevant educational, financial, and institutional issue that can affect the overall student experience at for-profit campuses.

A thorough assessment process can suggest innovations that produce substantial improvement in student satisfaction and educational outcomes. In fact, assessment helps institutions avoid the trap of doing the same things over and over, a pattern that leads to stagnation and undermines institutional quality. Strategically aligned advances based on valid, reliable, and consistent data allow institutions to change, create, and innovate in their operations.
Reliable student assessment data not only allow for-profit institutions to plan more strategically; they can also be used to demonstrate campus progress toward goals for student satisfaction and institutional effectiveness. National and regional accrediting bodies evaluate an institution’s effectiveness in part through its knowledge of student perceptions, persistence, and graduate outcomes, and its planning based on them. The first step of institutional effectiveness is identifying opportunities for improvement, and having reliable, valid, and consistent data can provide year-over-year indicators for improvement of educational quality, student services, and enrollment practices. (For a quantitative look at how assessment impacts these areas in the for-profit sector, see “The Value of Student Satisfaction Assessment at For-Profit Higher Education Institutions.”)
Consequently, a lack of valid, reliable, and consistent data—or the lack of a process to act on those data—can lead to misguided institutional decisions. The resulting lack of knowledge undermines institutional effectiveness and prevents truly strategic planning. This absence of data or actionable processes can affect enrollment practices and diminish persistence and graduate outcomes. Worse, these effects can be compounded year after year if the campus continues to operate ineffectively without knowing why. Single assessments, where a campus conducts an assessment once and does not follow up again in a consistent and timely fashion, can also produce false intelligence about an institution’s strengths and challenges. Simply put, these “have not” institutions are doing a disservice to their students and will be poorly positioned to meet marketplace expectations and federal requirements.
The purpose of gainful employment rules is to ensure that institutions are serving the needs of their constituents. The best way for institutions to meet those requirements is by embracing the data-rich, action-oriented approach of the “haves” of quality enrollment practices and assessment. This approach leads to innovation, efficiency, accountability, and, most importantly, a student body that feels it is being served well and is making a wise investment in its postsecondary education.
If you have any questions about how you can become more of a “have” for-profit institution—assessing students, planning strategically, and meeting accreditation requirements—please email me. I also encourage you to attend the free webinar How to Assess Student Satisfaction and Priorities, which will illustrate many of the strategies and benefits of student assessment.
As we have hunkered down during weeks of intense cold, I couldn’t help but remember the travel I did this fall to several warmer locales (Virginia Beach, Norfolk, and Anaheim) for three very different, yet ultimately similar, conferences.
These conferences were convened for three different audiences: the Virginia Community College System, the Southern Association for College Student Affairs (SACSA), and Educause.
These events differed greatly in terms of size, audience, and focus. The Educause conference was by far the largest of the three, hosted in the Anaheim Convention Center and sporting a massive exhibit hall with many vendor-specific learning labs. (The swag was far better than the pens and note pads my kids typically receive upon my return home!)
However, they all had the same goal: for participants to share and explore new ways of tackling the challenge of student success and completion. In fact, the urgent need to increase student retention and completion was palpable at all three events. Those from the Virginia Community College System are motivated by an aggressive statewide strategic plan. The attendees of SACSA are working under increased accountability brought on by performance-based funding and economic development drivers. And Educause attendees want to use technology to contain and reduce costs, as well as to enhance student learning outcomes.
Participants at each event were intensely engaged and motivated to take new ideas and strategies back to their own campuses. A few strategies and ideas really resonated with me.
My son Christian is in his sophomore year of high school. This past November, he received his first three search letters from colleges—all on the same day. While the envelopes were different colors, the letters were very similar in content, offer, and approach. He commented that two of the three letters were “exactly the same.”
That’s not what a campus wants to hear when trying to capture the attention of a prospective student in this increasingly competitive higher education environment. In fact, watching Christian’s reactions to these search communications made me think about the kinds of pitfalls that campuses can encounter when creating their direct marketing campaigns. I’ve been involved in higher education marketing for nearly 25 years, consulting with campuses of all different sizes, types, and missions. Time and again, I have seen an institution’s hard work undermined by correctable mistakes. Here are seven of the most common ones I have seen, and suggestions on how you can avoid making them.
Using old lists—and three months is old in the fast-moving world of college direct marketing—increases your chances of getting student names that have already been bombarded with marketing communications from other campuses. Think about the way we receive offers for things such as credit cards: with each solicitation, you are less likely to respond, especially by the time the ninth or tenth offer hits you. It’s no different for students.
Suggestion: Purchase new names that have been recently cultivated from your list provider, so you get fresh prospects who will be more receptive to your marketing messages.
Just because everyone takes the ACT or SAT does not mean you should rely only on those names for your list purchase. No single vendor has every name for a given market or territory. Plus, different vendors have different data and variables that you can use.
Suggestion: Diversify your list purchases, pulling in names from a variety of vendors that match specific enrollment needs or recruitment criteria. For example, ACT and the College Board allow you to purchase by score range and academic information, while NRCCUA offers a variety of student-specific information that may be important to an institution, such as extracurricular interests and religious affiliation.
If you work in enrollment management, it is always tempting to focus all of your work on the next incoming class. Are we getting enough inquiries? How many applications have we received? Will we make our goals next fall?
While these top-of-mind questions are important for the near term—indeed, you may need to focus most of your attention there depending on how things are going—you know the reality is that after you bring in the next class, you will face the same near-term demands all over again. With the next class. The class after that. And the class after that. The near-term mania never goes away.
Instead of only focusing on the immediate, today’s savvy enrollment managers strive to carve out time where it pays off most: building up strategic, foundational elements of an enrollment program that will pay off for many years to come.
Of course, there are many productive avenues toward this goal, but I am going to share two ways that Noel-Levitz considers to be especially important.
Let’s say you want to raise enrollment by 5 percent within three years. You could begin by striving to generate 15 percent more inquiries. Or you could begin by striving to gradually increase the inquiry-to-applicant conversion rate each year by concentrating on inquiries who have a greater chance of enrolling so you end up with the higher number of matriculants you need. The big advantage of the latter approach is that improving conversion rates has a longer-lasting effect, i.e., improving the processes and systems that move these metrics will affect many future years of prospective students.
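To make the funnel arithmetic concrete, here is a minimal sketch using hypothetical numbers (the inquiry pool, conversion rate, and yield below are illustrative assumptions, not Noel-Levitz figures):

```python
# Hypothetical enrollment funnel: 10,000 inquiries, a 20% inquiry-to-applicant
# conversion rate, and a 50% applicant-to-enrolled yield.
inquiries = 10_000
inquiry_to_applicant = 0.20
applicant_to_enrolled = 0.50

baseline = inquiries * inquiry_to_applicant * applicant_to_enrolled  # enrolled now
target = baseline * 1.05  # the 5 percent growth goal

# Path 1: buy more names. Even if purchased names convert at the same rates,
# you need a bigger inquiry pool every single cycle.
inquiries_needed = round(target / (inquiry_to_applicant * applicant_to_enrolled))

# Path 2: improve the process. A one-point lift in the conversion rate
# (0.20 -> 0.21) hits the same target with the same inquiry pool, and the
# improved process keeps paying off in future cycles.
rate_needed = target / (inquiries * applicant_to_enrolled)

print(baseline, target, inquiries_needed, round(rate_needed, 2))
```

With these illustrative rates, the same 5 percent enrollment gain comes from either 500 additional inquiries or a single percentage-point improvement in conversion, and only the latter compounds across future classes.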
The same logic works on later-stage metrics, such as the applicant acceptance rate for students who complete their applications, the accepted-to-enrolled yield rate, or retention rates such as the census-day-to-census-day persistence rate from term one to term two. In fact, depending on your situation, improving later-stage metrics may have the potential to produce even greater dividends than reaching the goals for early-stage conversions.
In response to calls for raising higher education graduation rates, the latest data from the U.S. Department of Education indicate that certain sectors are, in fact, showing some signs of improvement.
Last month, the Department of Education’s National Center for Education Statistics released new graduation rate data from IPEDS, the Integrated Postsecondary Education Data System.
(Important to note: IPEDS doesn’t track completion rates for less-than-full-time students or for transferring or returning/interrupted-attendance students. For example, it excludes students who didn’t graduate from the community college they started at but who may have transferred to a four-year college.)
There has been a lot of media attention lately in the higher education community about FAFSA position codes. Unfortunately, at least a couple of these articles had sensationalized headlines implying that these very important data were being misused by schools, against students.
First of all, what are we talking about? On the paper FAFSA application, students are given the option to indicate up to four colleges where they wish to have the FAFSA data sent. On the electronic application, they can include 10 institutions.
While the text on the FAFSA does not ask students to rank their choices (i.e., to indicate which school is their first-choice institution), the order in which a student lists institutions does appear to correlate with that student’s interest in each college or university. Consider the following results from campuses using Noel-Levitz financial aid services:
In an analysis of 153 of our campus partners, students enrolled at a 64 percent rate at the campuses listed first on their FAFSA. The yield dropped to 22 percent in position two and to 16 percent in position three. Students yielded at about the same rate in the remaining positions as those who did not file a FAFSA at all (12 percent). These trends were similar for both public and private colleges/universities.
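As an illustration of how position-based yield rates might feed an enrollment projection, here is a small sketch. The yield rates are the ones reported above; the admitted-student counts are hypothetical:

```python
# Yield rates by FAFSA position from the 153-campus analysis above;
# positions four and up yielded about the same as non-filers (12%).
yield_by_position = {1: 0.64, 2: 0.22, 3: 0.16}
default_yield = 0.12  # positions 4+ and students who filed no FAFSA

# Hypothetical admitted-student counts by the position they listed us in
# (None = admitted students who did not file a FAFSA).
admits = {1: 500, 2: 300, 3: 200, 4: 100, None: 400}

projected = sum(count * yield_by_position.get(position, default_yield)
                for position, count in admits.items())
print(round(projected))  # projected enrollees from this admit pool
```

With these illustrative counts, the sketch projects roughly 478 enrollees; the same arithmetic can be rerun against your own admit distribution to see how much of your class is concentrated among first-position filers.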
College transfer students have been a significant yet understudied student population. Thankfully, recent studies have uncovered valuable findings on transfer students.
In July 2013, the National Student Clearinghouse Research Center released Baccalaureate Attainment: A National View of the Postsecondary Outcomes of Students Who Transfer from Two-Year to Four-Year Institutions. The report, which tracked more than 230,000 students, included some informative data about students transferring from two-year institutions to four-year institutions.
Benchmarking yourself against these data and sharing them with institutional constituents pushing for higher completion rates, greater accountability, and affordability could inform the completion agenda, partnerships, articulation agreements, advising, and student support services on your campus.