Tuesday, May 15, 2012

A Syllabus Manifesto


Many instructors struggle with students floating through degree programs without understanding how any assignments or exams fit into their courses or curricula. Further, administrators struggle to account for the time that faculty spend developing syllabi and mapping coursework to specified learning outcomes. Preparing students for their post-graduate lives is rooted in a shared understanding of expectations. These expectations reach back to students' elementary questions: why do I need to know this? When am I ever going to use this? Similarly, potential employers want to understand what students are learning and whether their skill sets align with industry needs. Other stakeholders, including accrediting bodies and prospective students, seek answers to these questions as well. The most appropriate medium for answering these questions, organizing instructional tools, and accounting for course development is the course syllabus. That’s it – the answer is in the syllabus – but that can only be the solution if the syllabus is a living and accessible document.

We have discussed the idea of the syllabus as a contract between students and instructors that describes the expectations of both parties (April 2011, http://aefis.blogspot.com/2011/04/why-are-we-assessing.html). We were excited to see this perspective articulated by the Syllabus Institute in its tenets of the Modern Syllabus, which summarize three main roles of the syllabus: contractual, assessment, and marketing. As a contract, the syllabus outlines objectives, assignments, policies, and other general expectations. In assessment and accreditation processes, the syllabus supports continuous review of outcomes, promotes consistency in practices across the institution, and helps ensure curricular excellence. With nearly unlimited options in the higher education space, prospective students can base decisions on exciting, changing, and unique course offerings – which can be marketed with the syllabus. Similarly, current students can be better prepared for coursework and make more informed class selections if they have access to syllabi.

Syllabi are the vehicle for the content of a course and for the dissemination of instructors’ ideas to students, institutions, and potentially the public. Think about ways to use them, share them, and engage audiences to appreciate them through web-based platforms – online availability is the first step in this streamlined dissemination. Let us know what you are doing with your syllabi and your ideas for moving them forward…
Becky Yannes, AEFIS Team

Friday, January 27, 2012

"Accountability Yes, Hierarchy No"

By: Joshua Kim

AEFIS Response:


There is a great deal of risk in inviting large populations to provide feedback for making improvements. This is true when asking students to provide feedback about coursework and faculty practices, and it is equally true when seeking feedback from software users. Failing to meet requests or satisfy expectations erodes clients’ trust and reduces their likelihood of providing additional feedback. However, such feedback fuels innovation, and our development model welcomes it. A balance must be found between engaging leaders and primary decision makers and engaging the greater population of users outside the leadership realm. This idea of hierarchies and accountability is discussed in a recent post on Inside Higher Ed’s “Technology and Learning” blog, “Accountability Yes, Hierarchy No.”

Determining these hierarchies is a struggle that our company tackles regularly. Our client partnership model has allowed us to sit in on, and lead, great discussions about assessment, accreditation, and how institutional stakeholders view opportunities and challenges. Our partnerships have attracted clients who are willing to engage in these conversations and who view assessment, accreditation, and student learning as priorities requiring consistent enhancement on campus. Joshua Kim, author of the blog mentioned above, asks how best to minimize hierarchical boundaries so that educational technologies can be built through innovative collaboration while maintaining efficiency. On this point he notes that such an environment will have to accept, and even promote, risk taking.

Innovation is a celebrated concept. But innovation is usually preceded by costs, mistakes, failures, and delays – and these are not so celebrated. Ultimately, both technology developers and end users have to promote risk taking and accept the challenges that may arise before the most successful solution to the problem at hand is realized.


Becky Yannes, AEFIS Team

Monday, October 31, 2011

Getting Over the Hill - Using Mid-Term Course Evaluations Effectively

The middle of the academic semester can bring a slump in students’ attitudes toward coursework. Halloween festivities, football season, and eyes set on fall breaks ahead all serve as great distractions from assignments and class participation. Institutions and instructors can regain students’ focus and gauge student learning through mid-term course evaluations. Such evaluations provide an outlet for students to comment on the strengths and weaknesses of their courses. This feedback can be used to improve course structure going forward and to help faculty engage their student audiences. Many institutions and individual instructors use mid-term course evaluations as tools for gathering data to develop a continuous feedback loop. At some institutions, this strategy has proven successful in increasing student participation in the end-of-term course evaluations that follow (Enyeart 15).

Surveying students with online tools has been shown to increase written feedback on course evaluation forms. Students are not limited by the time constraints that may accompany in-class, paper-based course evaluations. At one institution, the number of words students typed in response to open-text questions on web-based course evaluations was seven times greater than the number written on paper-based evaluations (Enyeart 5).

Robert T. Brill, PhD, an associate professor of psychology at Moravian College, shared his three-option feedback approach in a blog post at www.facultyfocus.com. This strategy poses questions about specific course components – textbooks, assignments, lectures, and so on – and asks students to respond with one of three options: keep as is, keep but modify, or remove from the course. Students are also asked to justify their responses; their feedback supports making changes in the classroom and, more specifically, indicates what changes to make. It likewise provides support for continuing successful practices. Dr. Brill asks questions tied to the student learning outcomes for the course, which brings students’ focus to the educational objectives detailed in course syllabi. He welcomes written responses, but also invites students to participate online through his institution’s learning management system.

The Education Advisory Board’s 2009 study on online student course evaluations, referenced at the beginning of this post, provides example questions from one of the universities involved in its research efforts:

List the major strengths of this course. What is helping you learn in this course? Please give a brief example.

List any changes that could be made in the course to assist with your learning. Please suggest how changes could be made.

The Director of Assessment at this university noted that “[Using mid-term course evaluations] is a very simple, very easy-to-implement way of telling students that their feedback is valuable to them, and it always, always, always improves end of semester course evaluation response rates.”

Students support mid-term course evaluations because they demonstrate instructor interest in student opinions. Jeff Wojcik, the College of Literature, Science, and the Arts Academic Relations Officer at the University of Michigan, made the case in an opinion article in the Michigan Daily earlier this year:

Unlike end-of-term evaluations, which can only create improvements for future students, instructors benefit from midterm feedback because they can augment their teaching, if necessary, for students who are currently taking the course. This immediate response can help students learn better and allow professors to adopt a style that best accommodates specific semesters and sections of students. Feedback also allows students to indicate an interest in a relevant political topic, a small change to lecture slides or other suggestions that might not warrant a meeting with a professor.

Learn more from the sources referenced in this post…

Online Student Course Evaluations: Strategies for Increasing Student Participation Rates
Christine Enyeart and Michael Ravenscroft, Education Advisory Board, May 8, 2009
http://tcuespot.wikispaces.com/file/view/Online+Student+Course+Evaluations+-+Strategies+for+Increasing+Student+Participation+Rates.pdf

“How to Make Course Evaluations More Valuable”
Robert T. Brill, Faculty Focus, February 29, 2009
http://www.facultyfocus.com/articles/effective-teaching-strategies/how-to-make-course-evaluations-more-valuable/

“More than final feedback”
Jeff Wojcik, Michigan Daily, February 14, 2011
http://www.michigandaily.com/content/jeff-wojcik-implement-midterm-course-evaluations

Becky Yannes, AEFIS Team

Wednesday, August 3, 2011

Well-Informed Customers Serve Institutions Best

Many factors go into high school students’ college selection processes. College Prowler (www.CollegeProwler.com), a website dedicated to sharing students’ statistics on universities around the country, breaks the decision down into twenty defining characteristics. Academics, however, sits at the top of that list only because of alphabetical ordering. Still, students hit the nail on the head when it comes to what they want to know about academics: relevance, effectiveness, and flexibility.

Does this mean that collecting quality assurance data should be left to college marketing personnel, to be used only to attract prospective students? No – it means there are opportunities for data use across an institution. Current university students’ voices carry to prospective students; they are the primary source of information for both students and parents looking at higher education institutions. Thus, it is important that students are aware of quality assurance efforts at their schools and get involved in the process through regular course evaluations. Students also need to be aware of the assessment data collected in the classroom and how it is used to improve programs.

How can students realistically be involved in these processes? As institutions develop transparent means of collecting assessment data, that data can and will be more easily shared across institutional levels. This includes online course evaluations, whose data is immediately portable and can be readily tailored for various audiences, including students. This information is most appropriately shared with students through advising processes. Publishing feedback from alumni can also be valuable for institutions, faculty, and current and prospective students.

“The customer is always right” is not an appropriate model for higher education – but well-informed students who understand institutional feedback loops make great customers, sharing positive experiences with prospective customers: the next generation of [tuition-paying] students.
Becky Joyce, AEFIS Team

Monday, July 18, 2011

"Teaching Them How to Think"

By: Dan Berrett

AEFIS Response:
Pat Hutchings, senior associate at the Carnegie Foundation for the Advancement of Teaching, was quoted in the recent Inside Higher Ed article “Teaching Them How to Think” by Dan Berrett. Our Team shares her sentiments:

“Assessment means asking if students are learning what I think I’m teaching,” said Hutchings. “My sense is that what we need to think about now is how faculty can take back assessment. It’s been possessed by others, if you will.”

If you have met with our Team to discuss assessment, you have heard our two mechanisms for measuring student learning:

1. Brain surgery – really opening things up and seeing what students are understanding;

2. Organizing direct and indirect measurements to:

   A. Survey students on what they think they are learning,
   B. Ask students direct questions on specific topics and evaluation results,
   C. Set outcomes and performance criteria,
   D. Survey graduates on job searches and career successes,
   E. Survey faculty on what they think they are teaching.


Our Team recommends the latter, because the first can get messy. This sort of data collection has a proven record of fueling continuous course and programmatic improvement – but Hutchings notes an important point in the feedback loop. Instructors are the link between students and institutions; they are the faces of higher education. Thus, it is important for faculty to embrace assessment and to receive recognition for innovating assessment practices.

The focus on the curriculum vitae in promotion and publication processes is so great that assessment offers real room for valuable involvement and impressive experience in improving student learning in higher education. Consider the multitude of assessment conferences held annually, with their opportunities for publication and their environments for assessment discussion and research.

When next updating your CV, take a second look at the areas where you have room for growth – department service, major administrative responsibilities – and find ways to work with your department to get involved in assessment.

Becky Joyce, AEFIS Team