Wednesday, August 3, 2011

Well Informed Customers Serve Institutions Best

There are many factors that go into high school students’ college selection processes. College Prowler (www.CollegeProwler.com), a website dedicated to sharing student-sourced statistics on universities around the country, breaks the factors into twenty defining characteristics. Academics sits at the top of that list only because the list is alphabetized. Yet students hit the nail on the head when it comes to what they want to know about academics: relevance, effectiveness, and flexibility.

Does this mean that collecting quality assurance data should be left up to college marketing personnel, to be used only to attract prospective students? No. It means there are opportunities for data use across an institution. Current university students’ voices carry to prospective students; they are the primary source of information for the students and parents who are evaluating higher education institutions. Thus, it is important that students are aware of quality assurance efforts at their schools and get involved in the process through regular course evaluations. But students also need to be aware of the assessment data that is collected in the classroom and how it is used to improve programs.

How can students realistically be involved in these processes? As institutions develop transparent means to collect assessment data, that data can and will be more easily shared across institutional levels. This includes online course evaluations whose data is immediately portable. Assessment data recorded online can likewise be reshaped for various audiences, including students; this information is most appropriately shared with students through advising. Publishing feedback from alumni is also valuable to institutions, faculty, and current and prospective students.
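
To make “portable” data concrete, here is a minimal sketch of rolling raw course-evaluation responses up into an audience-friendly summary. The record format, field names, and ratings are hypothetical assumptions for illustration, not the AEFIS schema.

    from statistics import mean

    # Hypothetical exported evaluation records: one dict per student response.
    responses = [
        {"course": "BIO 101", "question": "Course material felt relevant", "rating": 4},
        {"course": "BIO 101", "question": "Course material felt relevant", "rating": 5},
        {"course": "BIO 101", "question": "Class time was used effectively", "rating": 3},
    ]

    def summarize_for_students(records):
        """Roll raw responses up to per-question averages, the level of
        detail an advisor might share with a current or prospective student."""
        by_question = {}
        for r in records:
            by_question.setdefault(r["question"], []).append(r["rating"])
        return {q: round(mean(ratings), 2) for q, ratings in by_question.items()}

    print(summarize_for_students(responses))
    # {'Course material felt relevant': 4.5, 'Class time was used effectively': 3}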

“The customer is always right” is not an appropriate model for higher education. But well-informed students who understand institutional feedback loops make great customers, and they share positive experiences with prospective customers: the next generation of [tuition-paying] students.
Becky Joyce, AEFIS Team

Monday, July 18, 2011

"Teaching Them How to Think"

By: Dan Berrett

AEFIS Response:
Pat Hutchings, senior associate at the Carnegie Foundation for the Advancement of Teaching, was quoted in the recent Inside Higher Ed article “Teaching Them How to Think” by Dan Berrett. Our Team shares her sentiments:

“Assessment means asking if students are learning what I think I'm teaching,” said Hutchings. “My sense is that what we need to think about now is how faculty can take back assessment. It's been possessed by others, if you will.”

If you have met with our Team to discuss assessment, you have heard our two mechanisms for measuring student learning:

1. Brain surgery: really opening things up and seeing what students are understanding, or

2. Organizing direct and indirect measurements (a small data sketch follows this list) to:

   A. Survey students on what they think they are learning,
   B. Ask students direct questions on specific topics and evaluation results,
   C. Set outcomes and performance criteria,
   D. Survey graduates on job searches and career successes,
   E. Survey faculty on what they think they are teaching.
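
For readers who think in code, here is a minimal sketch of what organizing outcomes, performance criteria, and direct and indirect measurements might look like as a data structure. The class names, fields, and example entries are assumptions for illustration, not the AEFIS data model.

    from dataclasses import dataclass, field

    @dataclass
    class Measurement:
        description: str
        kind: str  # "direct" (e.g., exam questions) or "indirect" (e.g., surveys)

    @dataclass
    class Outcome:
        statement: str
        performance_criterion: str
        measurements: list = field(default_factory=list)

    outcome = Outcome(
        statement="Students can analyze experimental data",
        performance_criterion="70% of students score 3 or higher on a 4-point rubric",
    )
    outcome.measurements += [
        Measurement("Lab report rubric, item 2", "direct"),                 # items B, C
        Measurement("End-of-term student survey, question 5", "indirect"),  # item A
        Measurement("Alumni job-placement survey", "indirect"),             # item D
        Measurement("Faculty survey on topics taught", "indirect"),         # item E
    ]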


Our Team recommends the latter because the first can get messy. This sort of data collection has a proven record of supporting continuous course and programmatic improvement; however, Hutchings notes an important point in the feedback loop. Instructors are the link between students and institutions; they are the faces of higher education. Thus, it is important for faculty to embrace assessment and to receive recognition for innovating assessment practices.

The focus on the curriculum vitae in promotion and publication processes is so great that assessment offers room for valuable involvement and impressive experience in efforts to improve student learning in higher education. Consider the multitude of assessment conferences held annually, with their opportunities for publication and their environments for assessment discussion and research.

The next time you update your CV, take a second look at the areas where you have room for growth, such as department service and major administrative responsibilities, and find ways to work with your department to get involved in assessment.

Becky Joyce, AEFIS Team

Monday, June 27, 2011

Academic Assessment Googled

Instructors continue to stress to students that internet research can be dangerous when it comes to finding accurate information, but we are all guilty of turning to Google or Bing for quick answers. With my professional life devoted to academic assessment technologies, I recently indulged my curiosity about what the Google-found definition of academic assessment is, and therefore what the public’s perception of the term may be.

I was pleasantly surprised to come across a great definition of academic assessment in the first Google entry, from Skidmore College in Saratoga Springs, New York (http://cms.skidmore.edu/assessment/FAQ/what-is-assessment.cfm):

Tom Angelo once summarized it this way: "Assessment is an ongoing process aimed at understanding and improving student learning. It involves making our expectations explicit and public; setting appropriate criteria and high standards for learning quality; systematically gathering, analyzing, and interpreting evidence to determine how well performance matches those expectations and standards; and using the resulting information to document, explain, and improve performance. When it is embedded effectively within larger institutional systems, assessment can help us focus our collective attention, examine our assumptions, and create a shared academic culture dedicated to assuring and improving the quality of higher education."

Further exploration of the Skidmore College site exposed a treasure trove of assessment resources, including links to other institutional webpages and tools, even organized by discipline. With any research, one source often prompts the researcher to ask additional or different questions, so I continued my hunt for academic assessment resources by returning to Google with some of those questions. To start: who is Tom Angelo? Again, I was pleasantly surprised to be directed to a single webpage with a biography, recent workshop materials, and video from a conference presentation by Dr. Angelo (http://eerc.wsu.edu/events/angelo/index.shtml). Additional writings by Dr. Angelo revealed other superstar names in assessment, well-known assessment publications, and national organizations for assessment, including Dr. Peter Ewell, Change magazine, and The National Center for Academic Transformation.

To bring this all back together, I will quote Dr. Angelo: “The only reason to do assessment is to improve the thing we care about.” He also mentions that assessment practices are often unsuccessful on campuses because they are piecemeal across institutions. These concepts certainly apply to our online search practices as well. The AEFIS Team seeks to minimize these piecemeal efforts by bringing students, faculty, administrators, alumni, and industry together on one platform for assessment processes using the AEFIS Solution, AND by bringing assessment planners into one discussion through the AEFIS Assessment Collaborative.
Becky Joyce, AEFIS Team

Monday, June 13, 2011

"Assessment Disconnect"

AEFIS Response:
The article from early last year, “Assessment Disconnect,” received a great deal of negative feedback within a single day. The negativity homed in on the ineffectiveness of assessment in higher education and the lack of evidence to support actively pursuing assessment data in institutions. Academic freedom and diversity were called into question in many of the article’s comments, regarding the idea that assessment and accreditation are synonymous with standardization.

Most accrediting agencies provide broad learning goals with minimal direction on instruction or means of assessment. Such vagueness invites institutions, programs, and even individual instructors to develop curricula freely and to find creative means for students to attain high-level goals. Additionally, collecting data against these goals provides perspective on the effectiveness of student learning and sheds light on areas that need to be reinforced for success in related career fields. Students attend institutions to work toward career goals, and most seek employment related to their fields after earning their degrees. That holds for engineering, medical, philosophy, design, and performing arts students…all students! Thus, assessment must be applicable to all students and any discipline. This conclusion opens a new question: not whether to assess, but how to assess.
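
To make “collecting data against these goals” concrete, here is a minimal sketch of checking rubric scores against a performance criterion. The scores and the 70% threshold are made-up assumptions for illustration.

    # Hypothetical rubric scores (1-4 scale), one per student, for one outcome.
    scores = [4, 3, 2, 4, 3, 1, 3, 4]
    criterion_score = 3      # target rubric score
    criterion_rate = 0.70    # target share of students at or above that score

    attained = sum(s >= criterion_score for s in scores) / len(scores)
    status = "met" if attained >= criterion_rate else "not met"
    print(f"{attained:.0%} of students scored {criterion_score}+ (criterion {status})")
    # Prints: 75% of students scored 3+ (criterion met)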

Finding methods for assessment is similar to developing strong instructional methods: practices should reinforce the mission and values of the institution. There are great opportunities to develop best practices through collaboration, and many assessment conferences each year invite institutions to share ideas and brainstorm means for growth and improvement. Check out www.goAEFIS.com/events to learn more about many of these conferences and how to get involved.
Becky Joyce, AEFIS Team

Tuesday, May 24, 2011

"Model of the Moment"

By: Steve Kolowich

AEFIS Response:
Competency-based models in higher education cater to the fast-paced attitudes of many of today’s students, particularly part-time students, who want to get a degree and get into the workforce. However, many criticize this model for its limits in providing students with learning experiences outside of their fields and technical skillsets, going so far as to say that the competency-based model “is not a college education.”

How does a competency-based model compare to the implementation of a strong assessment plan with student learning outcomes? The difference is the application of data: in a competency model, students are awarded credit based on successful completion of competency tests after preparing individually, without an instructor or structured course, while student learning outcomes performance data is collected as a supplement to grades to understand student learning and achievement. In the competency model, the focus is removed from effective student learning in an effort to save the customers, the students, time and money. Institutional development shifts from curriculum design to test design, to ensure that graduates have suitable credentials to obtain a degree. Are these methodologies separate but equal?
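
The contrast in data shows up clearly in what each model records per student. This is a minimal sketch; the record shapes, field names, and values are illustrative assumptions, not any vendor’s schema.

    # Competency-based model: credit hinges on a single pass/fail test result.
    competency_record = {
        "student": "A. Lee",
        "competency": "Statistics I",
        "passed": True,
    }

    # Outcomes-based model: outcome-level scores are collected alongside the
    # grade, so programs can see what was learned, not just that credit was earned.
    outcomes_record = {
        "student": "A. Lee",
        "course": "STAT 101",
        "grade": "B+",
        "outcome_scores": {"hypothesis testing": 3, "regression": 2, "data ethics": 4},
    }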

Regular student interaction, whether in person or online, and structured instruction benefit students’ learning and create opportunities for collaborative educational experiences. And the collection of assessment data over time benefits both the understanding of students’ development and institutional programmatic development as it relates to workforce professions. While competency-based models provide great convenience to part-time or returning students, they do not offer “18-year-old [freshmen] or 20-year-old community college student[s the opportunities] to really do well and get a degree.” And convenience is not the vehicle for a strong education.

Measuring competencies or student learning outcomes, in addition to grading traditional assignments, demonstrates to students what they have learned over the course of a curriculum and provides mechanisms for practicing necessary skillsets.
Becky Joyce, AEFIS Team