Where are the Simon Cowells within the Academy?
By Baylis
I am not an American Idol fanatic, but I will admit that I enjoy watching it. I will even admit that Simon Cowell is my favorite judge. The reason I like Simon is that I think he gives the most accurate evaluation of the contestants’ performances. Simon once said something that was almost profound. When asked why he was so hard on the contestants, he said, “I was not hired to be a friend to the contestants. I was hired to help them improve and to help the show to find the best performer.” This philosophy of accurate and consistent evaluation pegged to real-world standards points to a problem that I believe troubles the academy.
A lack of the Simon Cowell philosophy shows up in at least three places within the academy. The first is student grades. Many of us use the expression, “Where there’s smoke there’s fire,” whether we believe it or not. A topical search of the Chronicle of Higher Education reveals more than 300 articles or blogs about grade inflation. That’s a lot of smoke; is there a fire?
A highly publicized 2001 statement by Harvard Professor Harvey Mansfield led to an investigative report by the Boston Globe, a stinging exposé of the kind normally reserved for political corruption. The report revealed that many students were receiving A’s and being graduated with honors. Mansfield claimed that the grades Harvard professors were now giving “deserved to be a scandal.” Some of the claims of Mansfield and the Boston Globe were corroborated by a 1993 U.S. News and World Report article that reported the following statistics: in 1992, 91 percent of all undergraduate grades at Harvard were B- or higher, and in 1993, more than 80 percent of all Harvard seniors graduated with honors.
But Harvard was not the only university with these kinds of numbers. An internal Minnesota State University Mankato article on grade inflation reported an average GPA of 2.93 for all undergraduates, noting that this is nearly a B average. The Mankato report quoted a 1993 U.S. News and World Report article by John Leo, entitled “A for Effort, or Showing Up,” which suggested that only 6 percent of all student grades at Stanford were C’s. Leo also claimed that prior to 1993, Stanford did not permit an F grade.
In 1995, Ron Darby, a professor of chemical engineering, wrote a memorandum to the Academic Affairs Committee of the Texas A&M Faculty Senate on the subject of grading standards. Darby began his memo by claiming “that a serious situation exists within our university that will probably result (if it hasn’t already) in very serious consequences relative to the credibility and reputation of this institution.” To what situation was he referring? The “establishment, enforcement, and maintenance of reasonable standards of performance for students, and related qualifications for degrees…”
Darby went to the Texas A&M University Regulations to point out that instructors should assign grades according to the following degrees of achievement:
- A: Excellent
- B: Good
- C: Satisfactory
- D: Passing
- F: Failing
Darby continued by suggesting that there should be a direct correspondence between these grading levels and the levels of proficiency demonstrated by our students. However, he observed that the “vast majority of instructors on this campus consider the grade of C to be unsatisfactory” and therefore assign other grades accordingly. Darby gave an example of a report turned in by an undergraduate student in his department. Two other departmental faculty members decried the quality of the report but noted that, instead of the D or F it deserved, it had actually received a grade of B-.
Darby noted, with a certain sense of irony, that the average grade in many of the departmental courses that emphasized technical writing was a B+ or A-, despite the fact that employers who hired departmental graduates frequently reminded the department that communication skills were the greatest weakness of Texas A&M grads.
Darby then outlined the problems that result from grade inflation:
- The integrity of the institution can be questioned. If the institution graduates students who have demonstrated less than satisfactory performance, it loses its credibility in the eyes of the prospective employers of those graduates.
- The students themselves eventually suffer because, when they get into the “real world,” they will find that the sloppy work they got by with in college will not cut it out there.
- The instructors eventually suffer because less competent students who graduate with good grades reflect negatively on them.
- The institution with low standards suffers a bad reputation, which will eventually affect recruitment of students and faculty as well as grants and gifts.
- Employers who hire graduates expecting them to be qualified, only to find that they are not, reap the consequences of having to retrain those individuals or replace them with competent employees.
- The public suffers because it does not get what it is paying for, whether through financial aid to students at private institutions or direct budget assistance to public institutions.
Darby concluded his memo with a proposed solution: the institution and individual faculty members should adopt what is essentially the Simon Cowell philosophy for evaluating student performance, i.e., a standard consistent with the one students will experience after graduation. If the institution and its faculty did that, everyone could be assured of a definite correlation between performance in school and performance later on the job.
A 2002 Chronicle of Higher Education article by Alfie Kohn began with the statement that “grade inflation got started in the late 60’s and early 70’s.” Many people both inside and outside the academy believe this statement. However, Kohn continued by reminding us of the 1894 Harvard Committee on Raising the Standard, which suggested that grades of A and B were given too readily, with A’s given for “work of no [sic] very high merit” and B’s for “work not far above mediocrity.” The 1894 Harvard report concluded that “one of the chief obstacles to raising the standards of the degree is the readiness with which insincere students gain passable grades by sham work.” Apparently the concern about grade inflation among faculty has been around for more than a century.
Is grade inflation really a problem, and should we be concerned? What’s wrong with inflated grades? Ron Darby’s memo outlines a number of the problems. What are the purposes of grades? Many in education and most people outside the academy believe that grades are supposed to gauge how much a student knows or how well he or she can do something. If we inflate grades, we give students and others an improper evaluation of that student’s knowledge or skills, and we open ourselves to the problems Darby outlines.
The second place we need the Simon Cowell philosophy is in the annual evaluations of both faculty and staff. At one institution where I worked, the Director of Human Resources asked me to help design a better annual evaluation form. Why? The form in use listed a number of characteristics relevant to the particular job under consideration, and the supervisor was asked to indicate, for each characteristic, whether the employee met expectations, exceeded expectations, or failed to meet expectations. One year, over 80 percent of employees exceeded expectations on 75 percent of the characteristics listed for their jobs. We were living in Lake Wobegon.
The one change to the form that I suggested was to require supervisors who gave an employee anything other than a rating of met expectations to also give a concrete example of what the employee did to exceed or fail to meet expectations. The supervisor was also required to discuss the ratings with the employee within a short, specified time frame, and the employee was given the opportunity to append a statement explaining his or her perspective on the matter. The number of employees who exceeded expectations dropped dramatically, and very few employees disputed their failed-to-meet-expectations ratings. After several years of using the new form, the Director of Human Resources had a paper trail to assist in decisions about promotions or dismissals. And since each employee had the opportunity to discuss each annual review and respond to what he or she thought were errors, no employee could claim not to know how his or her performance was viewed by the supervisor and the institution.
In the March 28, 2010 issue of the Chronicle Review, Ben Yagoda expressed the usual faculty thinking about annual evaluations in his article “Why I Hate Annual Evaluations”: the evaluations are useless and evaluate the wrong things, or evaluate the right things improperly or in the wrong ways. Since many faculty members are predisposed to the conclusions Yagoda expresses, the mounds of research to the contrary are of no avail. Whether you have a positive interest in student evaluations of faculty (SEF) or are dead set against them, you should check out John Centra’s book Reflective Faculty Evaluation and the research available at The IDEA Center website, www.ideacenter.org. Another good resource on faculty evaluation is R. A. Arreola’s (2000) handbook, Developing a Comprehensive Faculty Evaluation System: A Handbook for College Faculty and Administrators on Designing and Operating a Comprehensive Faculty Evaluation System. None of these three resources mentions Simon Cowell by name, but all three emphasize the necessity of honest and consistent evaluation of performance measured against the ideal professional standard, which is what Simon calls for and is criticized for doing. If we follow this path, it will find us the best performer, whether on stage or in the classroom.
A post on Sprynet.com by Michael Huemer, entitled “Student Evaluations: A Critical Review,” attempts to highlight the enormous body of literature on SEF. Sprynet is an inexpensive web posting service provided by Earthlink, so I will admit that Huemer’s posting is not peer-reviewed; however, most of the literature Huemer cites is. I found two notes on the validity of instructor ratings particularly interesting. The first concerned whether ratings of instructors change as the years pass. John Centra, in Reflective Faculty Evaluation, claims that SEF tend to correlate well with retrospective evaluations by alumni; former students do not change their perceptions of their instructors over time. The second was that one of the evaluative methods faculty favor most, at least for tenure, has been found in multiple tests not to be valid: peer evaluations and peer observations. In an article in Volume 52 (1997) of American Psychologist, Herbert W. Marsh and Lawrence A. Roche showed that ratings by colleagues and by trained observers did not substantially agree with one another for a given instructor. These ratings were therefore found not to be reliable, and reliability is a necessary condition for validity. This reminds me of the ratings given by the judges on American Idol; many times their ratings and critiques did not agree. Why would faculty prefer peer evaluations when they generally close off their classrooms and don’t let each other know what they are doing? I have some suspicions. One is that faculty believe colleagues will be easier on them than students, since colleagues are going through the same trials and tribulations. Here is where we need Simon Cowell as a colleague to give an honest assessment of our performance.
The third place the Simon Cowell philosophy is needed is on promotion and tenure committees. In spite of the recent publicity about professors being denied tenure, what happens when we dig into the statistics on tenure denials and approvals? Peter Fogg, in a Chronicle of Higher Education article entitled “No, No, a Dozen Times No,”[1] discusses the recent history of tenure decisions at the University of North Texas, in particular one year in which 12 faculty members up for tenure were denied. Fogg claims that until 2003, getting tenure was almost a sure thing at UNT, since “only one of the 33 professors who ever went up for tenure was denied. The year before, none of the 25 professors who applied got the thumbs down.”
When I assumed the reins of CAO at one institution, I reviewed the recent promotion and tenure decisions there. One recently tenured and promoted faculty member caught my attention because I kept hearing rumors of incompetence. When I investigated, I came to believe the rumors. When I asked the Chair of the P&T Committee about the rationale for promoting and granting tenure to this individual, I was somewhat surprised by the answer. The Chair responded that the committee knew the faculty member was not a good teacher; however, the faculty member was a great person and a close friend of many members of the P&T Committee. Their children played together. How could they turn such a good person out into the streets? Tenure’s up-or-out policy is difficult to apply to friends. In the meantime, the department in which this faculty member taught was suffering badly and rapidly losing students each year. I had to remind the P&T Committee of its responsibility to help provide a quality education for students, not to reward its friends. We needed Simon Cowell on our P&T Committee.
[1] Information taken from Peter Fogg’s article, “No, No, a Dozen Times No,” written October 1, 2004, which appeared in the Chronicle of Higher Education on April 20, 2010, in the Faculty section, Volume 51, Issue 6, Page A12.