Advisory Program Research and Evaluation
This article reviews the research literature to bolster the case for advisory and demonstrate that putting it at the core of a school is worth the investment. Lessons from CES schools also reveal the importance of a cycle of collaborative inquiry when planning and implementing advisory.
At its heart, advisory forges connections among students and the school community, creating conditions that facilitate academic success and personal growth. Intuitively, the program makes perfect sense. But that isn’t enough. Maintaining an effective school-wide advisory program requires a substantial investment of resources. So what does the research say?
It’s a tricky question. After all, how do you quantify a relationship? There are no statistics on personalization, and generalizing across the diverse spectrum of advisory programs is virtually impossible. Plus, as advisory facilitator Jeanne Fauci at Los Angeles’ Wildwood School points out, with only so many hours in a day, there’s the issue of capacity. “Although I would like to do a lot more data gathering and evaluation, it’s always hard to balance that with actually doing advisory,” she notes.
Even so, many schools regularly evaluate their advisory programs, and numerous studies examine issues commonly targeted in advisory, such as school culture and students’ participation in school activities. This article explores that research—both the large-scale studies and the school-based evaluations—to answer two basic questions: first, what published evidence suggests that advisories succeed? And second, how can educators evaluate their own programs?
The Research Literature
Few quantitative, systematic studies of advisory have been conducted, and there is little comprehensive data on its outcomes. Because advisory programs differ in objectives, components, and grade levels, they naturally net different results. Advisory is also rarely a school’s sole strategy for supporting students and fostering personalization. When a school adopts advisory in conjunction with smaller, longer classes, a focus on project-based learning and performance assessments, and a more democratic leadership model, it becomes tough to isolate the results of each individual effort. Did student achievement improve because of advisory, or because kids were encouraged to conduct their own in-depth investigations in class?
Despite these methodological challenges, many narrative accounts attest to advisory’s positive impact. Generally, studies have shown that students who don’t feel an attachment to school staff tend to have poorer attendance and are more likely to drop out than students who feel they are part of a supportive school environment. In addition, healthy relationships between teachers and students appear to facilitate academic achievement. Advisory can contribute to this type of positive school climate in several ways, including:
- Improved relationships between students and teachers (Espe, 1993; Totten & Nielson, 1994)
- An increased sense of trust and belonging (Ziegler & Mulhall, 1994)
- Better communication among all members of the school community (Simmons & Klarich, 1989)
- A strong atmosphere of equality (Putbrese, 1989)
- Reduced student smoking and alcohol use (Putbrese, 1989)
Other studies provide more explicit findings about advisory. In a nationwide survey, Mac Iver and Epstein (1991) investigated the opinions and perceptions of more than 2,000 principals. They found that after controlling for such variables as family and student background, region, and grade organization, principals with effective advisory programs in the middle grades reported stronger overall guidance programs and lower expected dropout rates in high school (though no data about actual dropout rates were provided in the study).
Regarding attendance rates, a study by Simpson and Boriack (1994) looked at a Texas middle school program specifically geared to reduce absenteeism among a group of 70 chronically delinquent students. The researchers found that by reaching out to parents and working closely with students in a daily advisory period, the school was able to generate “immediate and very gratifying” results. Average daily attendance among the students skyrocketed from 76 percent in the first 12 weeks to 95 percent for the next 24.
Overall, the published research on advisory is generally optimistic, indicating that the program leads to positive outcomes—such as increased attendance—that correlate with improved academic achievement. Advisory is thus a worthwhile investment, one supported by published research and countless testimonials. As Wildwood’s Jeanne Fauci emphasizes, “In the realm of human experience and relationships, advisory is a really important thing.”
With comprehensive studies of advisory often proving problematic, localized evaluations become much more important. Schools need to find ways to assess their programs effectively and report tangible results, particularly when working to marshal resources behind advisory. As veteran middle school teaching consultant and University of Vermont doctoral candidate Jim Burns makes clear, “Proactive leaders publicize data… prior to discussions of the merits of using instructional time for advisory when speaking to the school board, parents, community, or representatives of the media.” According to Burns, documenting and using concrete results is a necessary component of any successful advisory program.
In an era of diminishing budgets, with schools pressured to implement programs with a proven track record of success, educators need to be prepared to use data to demonstrate advisory’s results. So what are some productive, proactive approaches to evaluating advisory—approaches that both fine-tune the program and demonstrate its effectiveness?
Research suggests that there is no single correct way to measure an advisory program’s results. The process should be tailored to the specific program, with formative evaluations carried out during the planning and implementation stages and summative assessments completed after each program cycle. Since programs with different purposes and activities produce different outcomes, an advisory evaluator should always clearly specify the objectives under examination by asking essential questions about the program. For example, if a major goal is to promote a college-going culture at the school, it is important not only to measure the number of additional students applying to college, but also to survey those students about their motivations and the effect of advisory.
“As you create your (advisory) program, you also create the assessment instruments and connect them as closely to the avowed purpose of the program as possible,” advisory coordinator Alan Gordon recommends. At Gordon’s school, Souhegan High in Amherst, New Hampshire, students, parents, and teachers have been surveyed about advisory several times a year since the advisory program started in 1992. This collaborative inquiry around advisory has in turn facilitated valuable reflection and analysis at the school.
“The focus of our program has always been personalization, so we focused the surveying very specifically on that early on,” Gordon explains. “Did the kids feel known well? Did parents feel like advisors knew their kids well? Did advisors feel like they knew their kids well? We surveyed everyone early and often, on a scale, say from 1 to 4, from ‘Not known well’ to ‘Known extremely well.’ There were some open-ended areas for comments, and the results directed us in ways that were extremely helpful.”
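Souhegan’s approach—rating how well students feel known on a 1-to-4 scale across each constituency—is simple enough to tabulate with a few lines of code. The sketch below is a hypothetical illustration only: the response numbers and the `summarize` helper are invented for the example, not drawn from Souhegan’s actual survey data.

```python
from statistics import mean

# Hypothetical responses on a 1-4 scale
# (1 = "Not known well", 4 = "Known extremely well").
# These numbers are illustrative, not actual survey results.
responses = {
    "students": [3, 4, 2, 3, 4, 3],
    "parents":  [2, 3, 3, 4, 2],
    "advisors": [4, 3, 4, 4, 3],
}

def summarize(group_responses):
    """Return the mean rating and the share of ratings of 3 or higher."""
    avg = mean(group_responses)
    known_well = sum(1 for r in group_responses if r >= 3) / len(group_responses)
    return round(avg, 2), round(known_well, 2)

for group, scores in responses.items():
    avg, share = summarize(scores)
    print(f"{group}: mean {avg}, 'known well' share {share:.0%}")
```

Tracking the same two numbers—mean rating and the share of respondents who feel “known well”—survey after survey is one straightforward way to see whether program changes move the needle, as Souhegan’s repeated surveying did.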
According to Gordon, advisory has gone from one of Souhegan’s most controversial programs to a core element of the school’s culture, in large part because of the focused, transparent, and actionable inquiry carried out at the school. “(Early) surveys indicated that students felt known well,” he notes. “But as we began to implement a few changes over time, they felt known better. And advisors tended to feel better about the program, as did parents.”
Schools developing an advisory program from scratch can particularly benefit from collaborative inquiry. When a state administrator shut down the middle school in Emeryville, California, last year, students moved to a new Grade 7-12 institution, Emery Secondary School. Mark Salinas, a school coach with the Bay Area Coalition for Equitable Schools (BayCES), helped develop many aspects of the secondary school’s design, creating new structures to foster collaboration among adults, personalize learning for the students, and raise achievement. Features included small learning communities, interdisciplinary teaching teams, and an advisory program. In this nascent environment, formative action research proved vital. Emery’s students were surveyed about the advisory curriculum in September and January of year one, and staff advisors regularly worked through a cycle of inquiry, prioritizing program goals and reflecting on their leadership roles.
“We looked at all the data and we developed our advisory plan for this upcoming year [2004-2005], in regard to the goals we set,” Salinas notes. “Grade-level team members got together in the summer, and as we looked at the survey data about the students’ interests, they mapped that into the advisory curriculum.”
And Salinas hopes to further develop program evaluation and action research at Emery. “I’m pushing for us to talk to students and collect more data this year, for when we do specific initiatives—focusing on attendance or what have you—we can find out more about the effects,” he says. “Was it advisory? Was it your advisor? Was it the small learning community team? What is it that’s contributing?” Researchers agree with this approach, noting that asking advisors and advisees about their perceptions and suggestions is important. However, they also emphasize that subjective reactions and impressions alone are not necessarily adequate outcome measures. Hard, objective data should also be considered whenever possible.
Galassi, Gulledge, and Cox (1997) explain: “For example, if we are interested in knowing whether [advisory] programs are associated with lower dropout rates, fewer disciplinary events, and higher grades, then the most adequate measures would appear to be objective indexes of these variables and not someone’s impressions or opinions about whether dropout rates have fallen and grades and discipline have improved.”
Methodologically speaking, research suggests that the best advisory evaluations consider participants’ subjective impressions in conjunction with objective indexes. As lone measures, data and opinion may be limited, but when considered together, they provide a more accurate assessment of a program’s effectiveness.
Personalization Surmounts Academic and Health Barriers
Essential schools have long cultivated strong practices around knowing students well, and thanks to the growing national small schools movement, they now find themselves sharing the concept, habits, and specific practices of personalization with schools nationwide.
Recent findings announced by the Wingspread Group, a forum of over twenty education and health leaders, are likely to intensify calls to strengthen students’ personal connection to school. Published in the September 2004 Journal of School Health, the Wingspread findings contribute substantially to the array of research demonstrating that students who feel connected to school are less inclined to participate in risky behaviors and more apt to do well academically. Among the Wingspread Group’s specific recommendations is the call to “ensure that every student feels close to at least one supportive adult at school.”
As the “Wingspread Declaration on School Connections” describes, “School connection is the belief by students that adults in the school care about their learning as well as about them as individuals.” Research shows that up to sixty percent of all students, from all backgrounds, feel disengaged and disconnected, making the work that Essential schools have done to understand and nurture personal connections within school communities crucial knowledge for schools nationwide. Validating the CES commitment to personalization in its many forms, these findings particularly underscore the primacy of advisories, the structure most likely to ensure that adult-student connection.
The Wingspread Group’s findings are gathered at www.jhsph.edu/wingspread.
The findings are published in full in the September 2004 issue of the Journal of School Health.
The Importance of Data
Data matters. Producing documented results for students has perhaps never been more important in education than it is today. And although advisory may indeed be all about personal relationships, collaborative inquiry and evaluation are not impossible tasks. When a local politician or policymaker asks “Why advisory?” or “What does it accomplish?” the evidence can save the program. As Mark Salinas observes, “There’s a feeling, seeing the initial data that we have, that what we are doing is making a difference in young folks’ lives.”
Reino Makkonen is an educational researcher and journalist based in San Francisco. He previously served as assistant editor of the Harvard Education Letter and worked for several years as a substitute teacher and textbook editor.
References
Brown, D.F. (2001). “Value of advisory sessions for urban young adolescents.” Middle School Journal. Vol. 23, No. 4, pp. 14-22.
Dickinson, T., McEwin, C., & Jenkins, D. (1998). “Lessons learned from successful middle level advisory programs.” Focus on Middle School (Ages 11-13): A Quarterly Newsletter for the Education Community. Vol. 10, No. 4, pp. 1-6.
Galassi, J., Gulledge, S., & Cox, N. (1998). Advisory: Definitions, descriptions, decisions, directions. Columbus, OH: National Middle School Association.
Manning, M., & Saddlemire, R. (1998). “High school advisory programs: The Roosevelt Road experience.” Clearing House. Vol. 71, No. 4, pp. 239-241.
National Middle School Association Research Summary #9: Advisory Programs (1996). Accessible online at www.nmsa.org/research/ressum9.htm.
Osofsky, D., Sinner, G., & Wolk, D. (2003). Changing systems to personalize learning: The power of advisories. Providence, RI: Brown University, Education Alliance, Northeast and Islands Regional Educational Laboratory. Accessible online at www.alliance.brown.edu/pubs/changing_systems /power_of_advisories/thepower.pdf
Burns, J. (1996). “The Five Attributes of Satisfying Advisories.” NELMS Journal. Vol. 9, No. 1. Accessible online at www.vla.com/idesign/attributes2.html
Espe, L. (1993). “The effectiveness of teacher advisors in a junior high.” The Canadian School Executive. Vol. 12, No. 7, pp. 15-19.
Galassi, J., Gulledge, S., & Cox, N. (1997) “Middle School Advisories: Retrospect and Prospect.” Review of Educational Research, Vol. 67, No. 3, pp. 301-338.
Mac Iver, D., & Epstein, J. (1991). “Responsive practices in the middle grades: Teacher teams, advisory groups, remedial instruction, and school transition programs.” American Journal of Education. Vol. 99, No. 4, pp. 587-622.
Putbrese, L. (1989). “Advisory programs at the middle level—The students’ response.” NASSP Bulletin. Vol. 73, pp. 111-115.
Simmons, L., & Klarich, J. (1989). “The advisory curriculum: Why and how.” NELMS Journal. Vol. 2, No. 2, pp. 12-13.
Simpson, G., & Boriack, C. (1994). “Chronic absenteeism: A simple success story.” The Journal of the Texas Middle School Association, Vol. 2, No. 2, pp. 10-14.
Totten, S., & Nielson, W. (1994). “Middle level students’ perceptions of their advisor/advisee program: A preliminary study.” Current Issues in Middle Level Education. Vol. 3, No. 2, pp. 8-33.
Ziegler, S., & Mulhall, L. (1994). “Establishing and evaluating a successful advisory program in a middle school.” Middle School Journal. Vol. 25, No. 4, pp. 42-46.