Monday, October 31, 2011

Getting Over the Hill - Using Mid-Term Course Evaluations Effectively

The middle of the academic semester can bring a slump in students’ attitudes about coursework. Halloween festivities, football season, and eyes set on fall breaks ahead all serve as great distractions from assignments and class participation. Institutions and instructors have opportunities to regain students’ focus and gauge student learning through mid-term course evaluations. Such evaluations provide an outlet for students to comment on the strengths and weaknesses of their courses. This feedback can be used to improve course structure moving forward and to assist faculty in engaging their student audiences. Many institutions and individual instructors use mid-term course evaluations as tools for gathering data to develop a continuous feedback loop. This strategy has proven successful at some institutions in increasing student participation in the end-of-term course evaluations that follow (Enyeart 15).

Surveying students online has been shown to increase written feedback on course evaluation forms, because students are not limited by the time constraints that can come with in-class, paper-based evaluations. At one institution, the number of words students typed in response to open-text questions on web-based course evaluations was seven times greater than the number written on paper-based evaluations (Enyeart 5).

Robert T. Brill, PhD, an associate professor of psychology at Moravian College, shared his three-option feedback approach in a blog post at www.facultyfocus.com. This strategy poses questions about specific course components – textbooks, assignments, lectures, and so on – and asks students to respond with one of three options: keep as is, keep but modify, or remove from the course. Students are also asked to justify their responses, so their feedback supports not only the decision to make changes in the classroom but, more specifically, which changes to make; it likewise provides support for continuing successful practices. Dr. Brill ties his questions to the student learning outcomes for the course, which brings students’ focus back to the educational objectives detailed in the course syllabus. He welcomes written responses, but also invites students to participate online through his institution’s learning management system.
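
For readers who administer such items online, here is a minimal sketch of how a three-option feedback item might be modeled in code. This is our illustration, not Dr. Brill's actual instrument; every name and field below is a hypothetical assumption.

from dataclasses import dataclass

# Hypothetical sketch of a three-option feedback item, loosely modeled
# on the approach described above; names and fields are illustrative.
OPTIONS = ("keep as is", "keep but modify", "remove from the course")

@dataclass
class FeedbackItem:
    component: str           # e.g., "textbook", "weekly quizzes"
    learning_outcome: str    # the course outcome the component supports
    choice: str = ""         # one of OPTIONS
    justification: str = ""  # required free-text rationale

    def respond(self, choice: str, justification: str) -> None:
        if choice not in OPTIONS:
            raise ValueError(f"choice must be one of {OPTIONS}")
        if not justification.strip():
            raise ValueError("a written justification is required")
        self.choice, self.justification = choice, justification

# Example response
item = FeedbackItem("textbook", "apply statistical tests to real data")
item.respond("keep but modify", "Chapters are clear, but add worked examples.")

Requiring a justification alongside the choice mirrors the point above: the rationale, not the vote, is what tells an instructor which changes to make.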

The Education Advisory Board’s 2009 study on online student course evaluations referenced in the beginning of this post provides example questions from one of the universities involved in its research efforts:

List the major strengths of this course. What is helping you learn in this course? Please give a brief example.

List any changes that could be made in the course to assist with your learning. Please suggest how changes could be made.

The Director of Assessment at this university noted that “[Using mid-term course evaluations] is a very simple, very easy-to-implement way of telling students that their feedback is valuable to them, and it always, always, always improves end of semester course evaluation response rates.”

Students support mid-term course evaluations because they demonstrate instructor interest in student opinions. Jeff Wojcik, the Academic Relations Officer for the College of Literature, Science, and the Arts at the University of Michigan, made the case in an opinion article in the Michigan Daily earlier this year:

Unlike end-of-term evaluations, which can only create improvements for future students, instructors benefit from midterm feedback because they can augment their teaching, if necessary, for students who are currently taking the course. This immediate response can help students learn better and allow professors to adopt a style that best accommodates specific semesters and sections of students. Feedback also allows students to indicate an interest in a relevant political topic, a small change to lecture slides or other suggestions that might not warrant a meeting with a professor.

Learn more from the sources referenced in this post…

Online Student Course Evaluations: Strategies for Increasing Student Participation Rates

Christine Enyeart, Michael Ravenscroft, Education Advisory Board, May 8, 2009
http://tcuespot.wikispaces.com/file/view/Online+Student+Course+Evaluations+-+Strategies+for+Increasing+Student+Participation+Rates.pdf

Faculty Focus

“How to Make Course Evaluations More Valuable”
Robert T. Brill, February 29, 2009
http://www.facultyfocus.com/articles/effective-teaching-strategies/how-to-make-course-evaluations-more-valuable/

Michigan Daily

“More than final feedback”
Jeff Wojcik, February 14, 2011
http://www.michigandaily.com/content/jeff-wojcik-implement-midterm-course-evaluations

Becky Yannes, AEFIS Team

Wednesday, August 3, 2011

Well Informed Customers Serve Institutions Best

There are many factors that go into high school students’ college selection processes. College Prowler (www.CollegeProwler.com), a website dedicated to sharing student-reported statistics on universities around the country, breaks the factors into twenty defining characteristics. Academics sits at the top of that list only because the list is alphabetical – yet students hit the nail on the head when it comes to what they want to know about academics: relevance, effectiveness, and flexibility.

Does this mean that collecting quality assurance data should be left up to college marketing personnel, to be used only to attract prospective students? No – it means that there are opportunities for data use across an institution. Current university students’ voices carry to prospective students; current students are the primary source of information for students and parents alike as they look at higher education institutions. Thus, it is important that students are aware of quality assurance efforts at their schools and get involved in the process through regular course evaluations. Students also need to be aware of the assessment data collected in the classroom and how it is used to improve programs.

How can students realistically be involved in these processes? As institutions develop transparent means to collect assessment data, that data can and will be more easily shared across institutional levels. This includes online course evaluations, whose data is immediately portable and easily reshaped for various audiences, including students – information most appropriately shared with students through advising. Publishing feedback from alumni can also be highly valuable for institutions, faculty, and current and prospective students.

“The customer is always right” is not an appropriate model for higher education – but well-informed students who understand institutional feedback loops make great customers, sharing positive experiences with prospective customers: the next generation of [tuition-paying] students.
Becky Joyce, AEFIS Team

Monday, July 18, 2011

"Teaching Them How to Think"

By: Dan Berrett

AEFIS Response:
Pat Hutchings, senior associate at the Carnegie Foundation for the Advancement of Teaching, was quoted in the recent Inside Higher Ed article “Teaching Them How to Think” by Dan Berrett. Our Team shares her sentiments:

“’Assessment means asking if students are learning what I think I'm teaching,’ said Hutchings. ‘My sense is that what we need to think about now is how faculty can take back assessment. It's been possessed by others, if you will.’”

If you have met with our Team to discuss assessment, you have heard our two mechanisms for measuring student learning:

1. Brain surgery – really opening things up and seeing what students are understanding;

2. Organizing direct and indirect measurements to:
   A. Survey students on what they think they are learning,
   B. Ask students direct questions on specific topics and evaluation results,
   C. Set outcomes and performance criteria,
   D. Survey graduates on job searches and career successes,
   E. Survey faculty on what they think they are teaching.

Our Team recommends the latter, because the first can get messy. This sort of data collection has a proven record of supporting continuous course and programmatic improvement – but Hutchings notes an important point in the feedback loop. Instructors are the link between students and institutions – they are the faces of higher education. Thus, it is important for faculty to embrace assessment and to receive credit for innovating assessment practices.

The focus on curricula vitae in promotion and publication processes is so great that assessment offers an underused opening: valuable involvement and impressive experience in efforts to improve student learning in higher education. Consider the multitude of assessment conferences each year, with their opportunities for publication and their environments for assessment discussion and research.

The next time you update your CV, take a second look at the areas where you have room for growth – department service, major administrative responsibilities – and find ways to work with your department to get involved in assessment.

Becky Joyce, AEFIS Team

Monday, June 27, 2011

Academic Assessment Googled

Instructors continue to stress to students that internet research can be dangerous when it comes to finding accurate information – but we are all guilty of turning to Google or Bing for quick answers. With my professional life devoted to academic assessment technologies, I recently indulged my curiosity about the Google-found definition of academic assessment and, by extension, the public’s perception of the term.

I was pleasantly surprised to come across a great definition of academic assessment in the first Google result, from Skidmore College in Saratoga Springs, New York (http://cms.skidmore.edu/assessment/FAQ/what-is-assessment.cfm):

Tom Angelo once summarized it this way: "Assessment is an ongoing process aimed at understanding and improving student learning. It involves making our expectations explicit and public; setting appropriate criteria and high standards for learning quality; systematically gathering, analyzing, and interpreting evidence to determine how well performance matches those expectations and standards; and using the resulting information to document, explain, and improve performance. When it is embedded effectively within larger institutional systems, assessment can help us focus our collective attention, examine our assumptions, and create a shared academic culture dedicated to assuring and improving the quality of higher education."

Further exploration of the Skidmore College site exposed a treasure trove of assessment resources, including links to other institutions’ webpages and tools – even organized by discipline. As with any research, one source often prompts the researcher to ask additional or different questions, so I continued my hunt for academic assessment resources by returning to Google with some of those questions. To start: who is Tom Angelo? Again, I was pleasantly surprised to be directed to a single webpage with a biography, recent workshop materials, and video of a conference presentation by Dr. Angelo (http://eerc.wsu.edu/events/angelo/index.shtml). Additional writings by Dr. Angelo revealed other superstar names, well-known publications, and national organizations in assessment, including Dr. Peter Ewell, Change magazine, and The National Center for Academic Transformation.

To bring this all back together, I will quote Dr. Angelo: “The only reason to do assessment is to improve the thing we care about.” He also notes that assessment practices often fail on campuses because they are piecemeal across institutions. These concepts most certainly apply to our online search practices as well. The AEFIS Team seeks to minimize such piecemeal efforts by bringing students, faculty, administrators, alumni, and industry onto one platform for assessment processes with the AEFIS Solution AND by bringing assessment planners into one discussion through the AEFIS Assessment Collaborative.
Becky Joyce, AEFIS Team

Monday, June 13, 2011

"Assessment Disconnect"

AEFIS Response:
The article “Assessment Disconnect,” from early last year, received a great deal of negative feedback in a matter of one day. The negativity homed in on the ineffectiveness of assessment in higher education and the lack of evidence to support actively pursuing assessment data at institutions. Academic freedom and diversity were called into question in many of the article’s comments, around the idea that assessment and accreditation are synonymous with standardization.

Most accrediting agencies provide broad learning goals with minimal direction on instruction or means of assessment. Such vagueness invites institutions, programs, and even individual instructors to develop curricula freely and to find creative means for students to attain high-level goals. Additionally, collecting data against these goals provides perspective on the effectiveness of student learning and sheds light on areas that need to be reinforced for success in related career fields. Students attend institutions to work toward career goals, and most seek employment related to their fields after earning their degrees. This is true of engineering, medicine, philosophy, design, performing arts…all students! Thus, assessment must be applicable to all students and any discipline. This conclusion opens a new question: not whether to assess, but how to assess.

Finding methods for assessment is similar to developing strong instructional methods: practices should reinforce the mission and values of the institution. There are great opportunities to develop best practices through collaboration, and many assessment conferences each year invite institutions to share ideas and brainstorm means for growth and improvement. Check out www.goAEFIS.com/events to learn more about many of these conferences and how to get involved.
Becky Joyce, AEFIS Team

Tuesday, May 24, 2011

"Model of the Moment"

By: Steve Kolowich

AEFIS Response:
Competency-based models in higher education cater to the fast-paced attitudes of many of today’s students – particularly part-time students – who want to get a degree and get into the workforce. However, many criticize this model for its limits in providing students with learning experiences outside their fields and technical skillsets, going so far as to say that the competency-based model “is not a college education.”

How does a competency-based model compare to the implementation of a strong assessment plan with student learning outcomes? The difference is the application of data. In a competency model, students are awarded credit based on successful completion of competency tests after preparing individually, without an instructor or structured course; student learning outcomes performance data, by contrast, is collected to supplement grades and build an understanding of student learning and achievement. The competency model removes the focus from effective student learning in an effort to save its customers – students – time and money, and institutional development shifts from curriculum design to test design, ensuring only that graduates have suitable credentials to obtain a degree. Are these methodologies separate but equal?

Regular student interaction, whether in person or online, and structured instruction benefit students’ learning and create opportunities for collaborative educational experiences. Likewise, the collection of assessment data over time benefits both the understanding of students’ development and institutional programmatic development as it relates to workforce professions. While competency-based models provide great convenience to part-time or returning students, they do not offer “18-year-old [freshmen] or 20-year-old community college student[s the opportunities] to really do well and get a degree.” And convenience is not the vehicle for a strong education.

Measuring competencies or student learning outcomes, in addition to grading traditional assignments, demonstrates to students what they have learned over the course of a curriculum and provides mechanisms for practicing necessary skillsets.
Becky Joyce, AEFIS Team

Monday, May 2, 2011

"I Won’t Mess with Your Course if You Don’t Mess with Mine"

Faculty Focus
By: Maryellen Weimer

AEFIS Response:
Students do not generally choose to attend a university to take one specific course; they enter a university to obtain a degree. Yet, as described in “I Won’t Mess with Your Course if You Don’t Mess with Mine,” faculty often do not recognize the obligations and opportunities to understand their courses in the context of a curriculum. Gerald Graff offers a reasonable strategy for minimizing such “courseocentrism”: outcomes-based assessment.

By developing course outcomes or objectives, institutions can structure a baseline for the instruction of each of their offered courses. This baseline opens the dialogue among all stakeholders – students, faculty, and administrators – to develop meaningful curricula and plans of study. Additionally, outcomes-based assessment provides a platform for:
  • Contextualizing student learning across prerequisite and curricular courses.
  • Networking with multidisciplinary instructors to facilitate research activities.
  • Giving students options to develop career-specific degree programs.
  • Preserving academic freedom, by structuring goals for courses without specifying means of instruction.
Outcomes-based assessment is a holistic approach to the educational process. It addresses the root of education – effective teaching and learning – by providing a metric to measure students’ understanding and application skills.

Our web-based assessment management solution, AEFIS, organizes outcomes as they relate to course sections, courses, programs, departments, units and institutions to automate processes for measuring and reporting on student outcomes performance.
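
As a rough illustration of what organizing outcomes across these levels can mean in practice, here is a minimal sketch of such a hierarchy. It is our invention for this post, not the actual AEFIS data model; every class and field name is an assumption.

from dataclasses import dataclass, field

# Illustrative sketch of outcomes linked across institutional levels;
# this is not the actual AEFIS schema.
@dataclass
class Outcome:
    code: str          # e.g., "PRG-1"
    description: str

@dataclass
class Level:
    name: str                                   # "program", "course", ...
    outcomes: list[Outcome] = field(default_factory=list)
    children: list["Level"] = field(default_factory=list)

def report(level: Level, indent: int = 0) -> None:
    """Walk the hierarchy, printing each level with its outcomes."""
    print(" " * indent + level.name)
    for o in level.outcomes:
        print(" " * (indent + 2) + f"{o.code}: {o.description}")
    for child in level.children:
        report(child, indent + 2)

program = Level("program", [Outcome("PRG-1", "design under realistic constraints")])
program.children.append(
    Level("course", [Outcome("C-2", "model physiological systems")],
          [Level("course section")])
)
report(program)

Rolling reports up from course sections to programs and institutions is exactly the kind of traversal that is tedious on paper and trivial once the hierarchy lives in a system.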
Becky Joyce, AEFIS Team

Monday, April 25, 2011

"Why Are We Assessing?"

By: Linda Suskie
AEFIS Response:
When approaching the tasks associated with outcomes-based assessment in higher education, we tend to skip the why and jump right into the how. Taking that jump yet another leap further, we begin to facilitate the how without defined goals that can be communicated to the stakeholders involved. Linda Suskie, Vice President of the Middle States Commission on Higher Education, captures these sentiments in her piece “Why Are We Assessing?” She urges those involved in the assessment of learning in higher education to recognize succinct goals for assessment: “Assessment is simply a vital tool to help us make sure we fulfill the crucial promises we make to our students and society.”

The promises that Suskie remarks on are being questioned by government policymakers, investors, accrediting agencies, and students and their parents – the consumers of higher education. So, how do we (a) define expectations and (b) demonstrate to society that we are delivering the expected outcomes for any given student?

AEFIS uses the Course Section Dashboard as the platform for faculty to develop effective contracts with students and other stakeholders. This document presents the what and the why expected of students, in the form of course outcomes and professional contributions. These contractual documents are more commonly known as course syllabi. They aim to answer more than what students are getting for their money – they also address how students will be able to understand concepts and apply skills as they enter the workforce.

Telling students and stakeholders about expected outcomes is one thing; following through is another. This is where direct and indirect assessment come into play. The AEFIS Solution hosts mechanisms for entering and archiving student assessment data, with student outcomes transcripts as the end result – students walk away from degree programs with more than diplomas: documented evidence of their performance on outcomes.
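
To make the outcomes transcript idea concrete, here is a minimal sketch of rolling per-assignment outcome scores up into a simple report. The records, scale, and format are invented for illustration and do not represent the actual AEFIS transcript.

from collections import defaultdict

# Hypothetical sketch: aggregate per-assignment outcome scores into a
# simple outcomes transcript. Records and the 0-4 scale are illustrative.
records = [
    # (outcome, assignment, score on a 0-4 performance scale)
    ("communicate effectively", "lab report 1", 3),
    ("communicate effectively", "final presentation", 4),
    ("apply engineering ethics", "case study", 2),
]

scores_by_outcome = defaultdict(list)
for outcome, _assignment, score in records:
    scores_by_outcome[outcome].append(score)

print("Outcomes transcript")
for outcome, scores in scores_by_outcome.items():
    mean = sum(scores) / len(scores)
    print(f"  {outcome}: mean {mean:.1f} across {len(scores)} assessments")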

This data is most valuable to students, but it is also invaluable to employers, government policymakers, accrediting agencies, and the like. With this information we can start to explain the means for achieving effective, personalized learning. Suskie challenges that “…we haven't figured out a way to tell the story of our effectiveness in 25 words or less, which is what busy people want and need…Today people want and need to know not about our star math students but how successful we are with our run of the mill students who struggle with math.” We accept the challenge!

Effective learning starts with an understanding of expectations and progresses with continuous communication, evaluation, and revision of those expectations.

Interactive archival systems such as AEFIS serve as vehicles for effective instruction, by connecting assessment to teaching and learning.

Becky Joyce, AEFIS Team

Thursday, April 21, 2011

Assessing our Assessing

At our recent ABET Symposium workshop, we asked participants to self-assess their programs’ assessment practices. As in any classroom setting, there were a few partakers quick to share their institutional efforts and several who shied away from the questions. The group managed to come up with some interesting discussion topics and open-ended questions for their faculty and administrators.

Much of the room admitted that their curriculum mappings and student assessments are planned and warehoused on paper. For most of them, this results in boxes and boxes of hard copies and hundreds to thousands of man-hours spent preparing for an ABET accreditation visit. Although the room had representatives from many different schools, several individuals described analysis packages developed in-house for assessment data; there was no talk of shared tools or practices. The conference itself is meant to be a forum for cross-pollination of ideas and best practices, yet the schools remain trapped in the silo effect. And so we introduced AEFIS to move the conversation forward.

Shifting the conversation from how data is collected and stored – the logistics of assessment – to the real meat and potatoes – what data should we be collecting, and how should we use it to improve student learning? – got the audience more engaged!

The outcomes that ABET expects of students at the completion of degree programs can be difficult to assess and report on, especially where there is little or unsustainable infrastructure in place. Developing assessments can be a trial-and-error process, and it begs for collaboration. So we dove right into some assessment activity questions:

How does your institution assess ethical components to report on outcomes/objectives?
  • Assessment Measures:
    • Scenario based test questions
    • Developed case studies
    • Field exercise interviews
  • …How is student performance rated / documented?
    • Against a rubric (see the sketch after this list)
  • …How often is such a rubric reviewed / adapted?
    • Rarely
How does your institution report on Program Educational Objectives?
  • Assessment Measures:
    • Student certifications post-graduation
  • …Does this demonstrate success or student learning?
    • Open for discussion
  • …How can we increase our response rates for alumni surveys?
    • Open for discussion
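
For the rubric-based rating noted in the list above, here is a minimal sketch of scoring student work against a simple rubric. The criteria and performance levels are invented for illustration; they are not drawn from any particular program.

# Minimal sketch of rating student work against a rubric; the criteria
# and performance levels below are invented for illustration only.
RUBRIC = {
    "identifies ethical issue": {"absent": 0, "partial": 1, "clear": 2},
    "weighs stakeholders":      {"absent": 0, "partial": 1, "clear": 2},
    "justifies decision":       {"absent": 0, "partial": 1, "clear": 2},
}

def score(ratings: dict[str, str]) -> int:
    """Convert per-criterion ratings into a total rubric score."""
    return sum(RUBRIC[criterion][level] for criterion, level in ratings.items())

student_ratings = {
    "identifies ethical issue": "clear",
    "weighs stakeholders": "partial",
    "justifies decision": "clear",
}
max_score = sum(max(levels.values()) for levels in RUBRIC.values())
print(f"{score(student_ratings)} / {max_score}")

Reviewing the rubric itself on a regular cycle – the step the room admitted happens “rarely” – matters as much as the scoring.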

These and more questions are being posed by institutions as they plan their assessment efforts. And these questions only started the process of assessing our assessing.

We encourage you to review the questions that workshop participants considered. Download a copy of the workshop materials at our website. Please feel free to share your ideas and comments or let us know what questions we should be asking!

Monday, April 11, 2011

2011 ABET Symposium

AEFIS is looking forward to the 2011 ABET Symposium, April 14-16 in Indianapolis, IN. Academic partners from Drexel University, along with Mustafa Sualp, President of AEFIS, and Becky Joyce, Operations Coordinator for AEFIS, will be presenting a workshop, "Connecting Assessment to Teaching and Learning to Sustain Accreditation."

80-Minute Mini Workshop
Saturday, April 16, 2011
10:35AM-11:55AM


ABET, Inc. is a recognized accreditor for college and university programs in applied science, computing, engineering, and technology. It has accredited over 3,100 programs at more than 600 colleges and universities.

Come visit us at our workshop, where we will be available to answer any questions about AEFIS and to share more about our AEFIS Partner Program. Please take a few minutes to learn more about AEFIS at www.goAEFIS.com or contact the AEFIS Team directly at info@goAEFIS.com.

Wednesday, April 6, 2011

What is AEFIS?

The AEFIS Team is dedicated to delivering to all stakeholders – administrators, students, alumni, faculty, and industry members – up-to-date information about the foremost assessment and accreditation news and practices. The AEFIS Team expects no less than the best, and neither should you. That's why we have created the Academic Evaluation, Feedback and Intervention System – AEFIS – a web-based academic assessment and accreditation management solution. AEFIS streamlines the accreditation process by documenting plans for continuous curriculum improvement and generating relevant, easy-to-read reports.

AEFIS was designed by a university for universities. In 2003, Drexel University's School of Biomedical Engineering, Science and Health Systems sought a solution for attaining ABET accreditation; that solution was AEFIS. In 2008, after successfully attaining accreditation, the School received a National Science Foundation (NSF) grant for research efforts that included the continued development of AEFIS's student-centered features. In 2009, Untra Corporation purchased the majority license for AEFIS in order to facilitate its commercialization.

Tuesday, February 22, 2011

AEFIS 3.0 Launch Party

Philadelphia, Pennsylvania – March 15, 2010 – The AEFIS Team announces the release of AEFIS 3.0, available on March 18, 2010. The launch will be kicked off with a private reception at Drexel University’s Bossone Research Enterprise Center. The gathering will showcase the enhanced software system and celebrate the collaborative efforts of AEFIS partners and supporters.

United States higher education is a $400 billion industry with little external or internal accountability for results achieved across approximately 4,000 U.S.-based colleges and universities. All stakeholders – from students and parents to administrators and faculty – struggle with the complexity of measuring and aligning resources efficiently and effectively. During accreditation review periods, thousands of staff-hours are spent manually collecting, analyzing, and reporting on the status of each academic department. Furthermore, the valuable data collected is rarely used in a systematic way to improve the teaching/learning process in higher education.

AEFIS President Mustafa Sualp says, “The AEFIS platform is designed to help college administrators and faculty improve the teaching-learning connection, a problem talked about a great deal, but one that has not had a clear path toward making measurable, department-by-department progress until now.”

AEFIS 3.0 helps solve these problems. A field-tested enterprise software platform, AEFIS 3.0 is designed to measure student and faculty performance in relation to learning outcomes set at varying institutional levels. The real-time data aggregated by the AEFIS Solution Platform can be applied to the established accreditation processes and, more importantly, to effective course and program design, leading to direct benefits for students, faculty, administrators, alumni and the wider industry. By combining a sophisticated technology platform with a very motivated user population, AEFIS provides faculty and administrators with real-time information needed to make beneficial decisions to better support student learning.

The benefits of AEFIS to higher education are becoming widely recognized. Mustafa Sualp, who is also the original architect of AEFIS, is an invited speaker at universities and institutional accreditation conferences worldwide. His 2010 schedule includes presentations at numerous accreditation and evaluation conferences across Europe and the United States in the coming months: the RosEvaluation Conference; the ABET Symposium; the American Society for Engineering Education (ASEE) Conference; the Frontiers in Education Conference; and the eighth International Conference on Education and Information Systems, Technologies and Applications (EISTA).

About AEFIS
Academic Evaluation, Feedback and Intervention System – AEFIS – is the web-based academic assessment management solution that automates best practices in assessment and evaluation in order to enhance curriculum development and streamline accreditation processes.

For more information, please contact Annaliese Cole, Untra Academic Management Solutions, LLC, 215.873.0800 x1030, or visit our website.