Monday, June 27, 2011

Academic Assessment Googled

Instructors continue to stress to students that internet research can be risky when it comes to finding accurate information, yet we are all guilty of turning to Google or Bing for quick answers. With my professional life devoted to academic assessment technologies, I recently indulged my curiosity about the Google-found definition of academic assessment and, by extension, the public’s perception of the term.

I was pleasantly surprised to come across a great definition of academic assessment in the first Google result, from Skidmore College in Saratoga Springs, New York (http://cms.skidmore.edu/assessment/FAQ/what-is-assessment.cfm):

Tom Angelo once summarized it this way: "Assessment is an ongoing process aimed at understanding and improving student learning. It involves making our expectations explicit and public; setting appropriate criteria and high standards for learning quality; systematically gathering, analyzing, and interpreting evidence to determine how well performance matches those expectations and standards; and using the resulting information to document, explain, and improve performance. When it is embedded effectively within larger institutional systems, assessment can help us focus our collective attention, examine our assumptions, and create a shared academic culture dedicated to assuring and improving the quality of higher education."

Further exploration of the Skidmore College site exposed a treasure trove of assessment resources, including links to other institutional webpages and tools, even organized by discipline. With any research, one source often prompts additional or different questions, so I continued my hunt for academic assessment resources by returning to Google with some of them. To start: who is Tom Angelo? Again, I was pleasantly surprised to be directed to a single webpage with a biography, recent workshop materials, and video of a conference presentation by Dr. Angelo (http://eerc.wsu.edu/events/angelo/index.shtml). Additional writings by Dr. Angelo revealed other superstar names in assessment, well-known assessment publications, and national organizations for assessment, including Dr. Peter Ewell, Change magazine, and The National Center for Academic Transformation.

To bring this all back together, I will quote Dr. Angelo: “The only reason to do assessment is to improve the thing we care about.” He also notes that assessment practices often fail on campuses because they are piecemeal across institutions. These concepts certainly apply to our online search practices as well. The AEFIS Team seeks to minimize these piecemeal efforts by bringing students, faculty, administrators, alumni, and industry together on one platform for assessment processes with the AEFIS Solution, and by bringing assessment planners into one discussion through the AEFIS Assessment Collaborative.
Becky Joyce, AEFIS Team

Monday, June 13, 2011

"Assessment Disconnect"

AEFIS Response:
The article from early last year, “Assessment Disconnect,” received a great deal of negative feedback in a matter of one day. The negativity homed in on the ineffectiveness of assessment in higher education and the lack of evidence to support actively pursuing assessment data at institutions. Academic freedom and diversity were called into question in many of the article’s comments, around the idea that assessment and accreditation are synonymous with standardization.

Most accrediting agencies provide broad learning goals with minimal direction on instruction or means of assessment. Such vagueness invites institutions, programs, and even individual instructors to develop curriculum freely and to find creative means for students to attain high-level goals. Additionally, collecting data against these goals provides perspective on the effectiveness of student learning and sheds light on areas that need to be reinforced for success in related career fields. Students attend institutions to work toward career goals, and most seek employment related to their fields after earning their degrees. This applies to engineering, medical, philosophy, design, and performing arts students…all students! Thus, assessment must be applicable to all students and any discipline. This conclusion opens a new question: not whether to assess, but how to assess.

Finding methods for assessment is similar to developing strong instructional methods: practices should reinforce the mission and values of the institution. There are great opportunities to develop best practices through collaboration, and many annual assessment conferences invite institutions to share ideas and brainstorm means for growth and improvement. Check out www.goAEFIS.com/events to learn more about many of these conferences and how to get involved.
Becky Joyce, AEFIS Team

Tuesday, May 24, 2011

"Model of the Moment"

By: Steve Kolowich

AEFIS Response:
Competency-based models in higher education cater to the fast-paced attitudes of many of today’s students, particularly part-time students, who want to get a degree and get into the workforce. However, many criticize this model for its limits in providing students with learning experiences outside their fields and technical skillsets, going so far as to say that the competency-based model “is not a college education.”

How does a competency-based model compare to the implementation of a strong assessment plan with student learning outcomes? The difference is the application of data. In a competency model, students are awarded credit based on successful completion of competency tests after preparing individually, without an instructor or structured course; student learning outcomes performance data, by contrast, is collected to supplement grades and build understanding of student learning and achievement. In the competency model, the focus is removed from effective student learning in an effort to save customers, the students, time and money. Institutional development shifts from curriculum design to test design, ensuring that graduates have suitable credentials to obtain a degree. Are these methodologies separate but equal?

Regular student interaction, whether in person or online, and structured instruction benefit students’ learning and create opportunities for collaborative educational experiences. And the collection of assessment data over time benefits both our understanding of students’ development and institutional programmatic development as it relates to workforce professions. While competency-based models provide great convenience to part-time or returning students, they do not offer “18-year-old [freshmen] or 20-year-old community college student[s the opportunities] to really do well and get a degree.” Convenience is not the vehicle for a strong education.

Measuring competencies or student learning outcomes, in addition to grading traditional assignments, demonstrates to students what has been learned over the course of a curriculum and provides mechanisms for practicing necessary skillsets.
Becky Joyce, AEFIS Team

Monday, May 2, 2011

"I Won’t Mess with Your Course if You Don’t Mess with Mine"

Faculty Focus
By: Maryellen Weimer

AEFIS Response:
Students do not generally choose to attend a university to take one specific course; they enter a university to obtain a degree. As described in “I Won’t Mess with Your Course if You Don’t Mess with Mine,” faculty often do not recognize the obligations and opportunities to understand their courses in the context of a curriculum. Gerald Graff provides a reasonable strategy for minimizing such “courseocentrism”: outcomes-based assessment.

By developing course outcomes or objectives, institutions can establish a baseline for the instruction of each of their offered courses. This baseline opens a dialogue among all stakeholders (students, faculty, and administrators) for developing meaningful curricula and plans of study. Additionally, outcomes-based assessment provides a platform for:
  • Understanding student learning in the context of prerequisite and curricular courses.
  • Networking with multi-disciplinary instructors to facilitate research activities.
  • Options for students to develop career-specific degree programs.
  • Academic freedom, by structuring goals for courses without specifying means of instruction.
Outcomes-based assessment is a holistic approach to the educational process. It addresses the root of education, effective teaching and learning, by providing a metric to measure students’ understanding and application skills.

Our web-based assessment management solution, AEFIS, organizes outcomes as they relate to course sections, courses, programs, departments, units and institutions to automate processes for measuring and reporting on student outcomes performance.
Becky Joyce, AEFIS Team

Monday, April 25, 2011

"Why Are We Assessing?"

By: Linda Suskie
AEFIS Response:
When approaching the tasks associated with outcomes-based assessment in higher education, we tend to skip the why and jump right into the how. Taking that jump yet another leap further, we begin to facilitate the how without defined goals that can be communicated to the stakeholders involved. Linda Suskie, Vice President of the Middle States Commission on Higher Education, captures these sentiments in her piece “Why Are We Assessing?” She urges those involved in the assessment of learning in higher education to recognize the essential goals of assessment: “Assessment is simply a vital tool to help us make sure we fulfill the crucial promises we make to our students and society.”

The promises that Suskie remarks on are being questioned by government policymakers, investors, accrediting agencies, and students and their parents, the consumers of higher education. So, how do we (a) define expectations and (b) demonstrate to society that we are delivering the expected outcomes for any given student?

AEFIS uses the Course Section Dashboard as the platform for faculty to develop effective contracts with students and other stakeholders. This document presents the what and the why expected of students, in the form of course outcomes and professional contributions. These contractual documents are more commonly known as course syllabi. They aim to answer more than what students are getting for their money; they also address how students will be able to understand concepts and apply skills as they enter the workforce.

Telling students and stakeholders about the expected outcomes is one thing; following through is another. This is where direct and indirect assessment come into play. The AEFIS Solution hosts mechanisms for entering and archiving student assessment data, with student outcomes transcripts as the end result: students walk away from degree programs with more than diplomas, carrying documented evidence of their performance on outcomes.

This data is most valuable to students, but it is also invaluable to employers, government policymakers, accrediting agencies, and the like. With this information we can begin to explain the means for achieving effective personalized learning. Suskie challenges us: “…we haven't figured out a way to tell the story of our effectiveness in 25 words or less, which is what busy people want and need…Today people want and need to know not about our star math students but how successful we are with our run of the mill students who struggle with math.” We accept the challenge!

Effective learning starts with an understanding of expectations and progresses with continuous communication, evaluation, and revision of those expectations.

Interactive archival systems such as AEFIS serve as vehicles for effective instruction by connecting assessment to teaching and learning.

Becky Joyce, AEFIS Team