Attitudes towards school self-evaluation

Jan Vanhoof *, Peter Van Petegem, Sven De Maeyer

Antwerp University, Institute of Education and Information Sciences, Universiteitsplein 1, 2610 Wilrijk, Belgium

Studies in Educational Evaluation 35 (2009) 21–28

A B S T R A C T

Research reveals that a positive attitude towards self-evaluation is a pre-condition which favours successful school self-evaluation. This article describes how self-evaluation is regarded in schools and investigates whether school characteristics can explain differences in the attitude of individuals. We report on a survey study conducted among 2716 school principals and teachers in 96 schools. Our research shows that respondents expressed themselves more positively with regard to the possible results of self-evaluation than with regard to the self-evaluation process itself. We also found that school principals exhibit a more positive attitude than teachers. Multi-level analyses demonstrate that the attitude towards self-evaluation is related to the characteristics of the broader functioning of the school where the respondent works (such as school culture and whether or not the school concerned meets the criteria of a professional learning community).

© 2009 Elsevier Ltd. All rights reserved.

* Corresponding author. Tel.: +32 32204143; fax: +32 32204998. E-mail address: (J. Vanhoof).

0191-491X/$ - see front matter © 2009 Elsevier Ltd. All rights reserved. doi:10.1016/j.stueduc.2009.01.004

Schools are increasingly being asked to shoulder a greater proportion of the responsibility for developing and guaranteeing educational quality, which involves, among other things, their being expected to engage in self-evaluation. This means that they are required to arrive at an appraisal of their current functioning (strengths and weaknesses) as a point of departure for a plan or vision for the future. Self-evaluation is "a procedure which is initiated and carried out by the school in order to describe and evaluate its own functioning" (Blok, Sleegers, & Karsten, 2005, p. 3). In the last decade self-evaluation formats have become, or are in the process of becoming, a commonplace activity in many schools. An analysis of the literature on self-evaluation reveals that expectations with regard to results are to a large extent positive, although there is, as yet, limited evidence to support this positive picture. Despite the fact that the introduction of self-evaluation is widely applauded, there are serious question marks about the quality of self-evaluations as they are currently practised. This raises the issue as to how far self-evaluations are being implemented in a manner which will yield worthwhile results and how differences in the quality of self-evaluations can be explained. The attitude towards self-evaluation is often suggested as a crucial factor in this. Self-evaluation can only work if team members are positively disposed towards it (McBeath, 1999). Thus, one pre-condition which favours a worthwhile self-evaluation is achieving an awareness that self-evaluation is a meaningful and fruitful activity. All things considered, self-evaluation is not something to be embarked on lightly (van Aanholt & Buis, 1990, p. 19).

There are indications that attitudes towards self-evaluation are generally not positive and it would appear that there is insufficient awareness in schools of the objectives and usefulness of self-evaluation (Schildkamp, 2007). The fact that schools in Flanders mostly have experience with self-evaluations that are imposed on them by government would also be likely to contribute to self-evaluation being seen more as an obligation, the principal objective of which is being compliant (i.e. meeting one's statutory and regulatory obligations), rather than as a tool for improving the school's functioning as an educational institution (Van Petegem, Verhoeven, Buvens, & Vanhoof, 2005). There is also evidence of a lack of openness within school teams and an unwillingness on the part of schools to look critically at their own performance. It would seem, therefore, that staff are often not mentally ready for carrying out a self-evaluation. Moreover, it is further apparent that, in many schools, identifying and confronting problems, questions, doubts, etc., and discussing these openly is by no means standard practice (Schildkamp, 2007). Evaluation and self-evaluation are still all too often seen as something threatening. Teachers still jealously guard their autonomy in the classroom and regard evaluation (self- or external) as a form of social control (MVG, 2006; Van Petegem et al., 2005). A positive attitude, which is the necessary foundation for implementing self-evaluation, is thus often absent. Among other things, the perceived onerousness of self-evaluation appears to play an innovation-inhibiting role. The same studies also suggest that school principals and teachers share the same resistance towards the added paperwork that a self-evaluation brings with it (Van Petegem et al., 2005). Nevertheless, the authors claim that head teachers are generally more positively disposed towards self-evaluation and that they are more convinced of the usefulness of self-evaluation activities than are their teachers. The attitude of principals towards self-evaluation is generally positive. Thus, head teachers in the Netherlands, for example, regard self-evaluation as a useful and instructive undertaking. They take the view that self-evaluation yields a reliable picture (Blok et al., 2005) and see it as a viable activity, although admitting that it takes up a lot of time. McBeath, Meuret, Schratz, and Jakobsen (1999) arrive at similar conclusions in their evaluation of a project in which 100 European schools carried out a self-evaluation. There are also indications that the attitude of head teachers is markedly more positive than that of teachers. Van Petegem et al. (2005, p. 288) state, for example, that the attitude of teachers vis-à-vis self-evaluation is often quite dismissive. School heads are by and large more positively disposed and are more convinced of the usefulness of self-evaluation activities. These findings coincide with an ongoing trend whereby carrying out self-evaluation is becoming an increasingly important component of the quality assurance system used in education. In this context, knowledge and understanding of attitudes towards self-evaluation is crucial, not least because of the existing empirical evidence pointing to the link between conceptions and behaviour (Kellgren and Wood, 1986).

Another question which needs to be asked concerns the school characteristics which are related to attitudes towards self-evaluation. The impact on school policy of organizational effectiveness (Quinn and Rohrbaugh, 1983) and of school characteristics which might suggest the existence of a professional learning community has repeatedly been demonstrated (Griffith, 2003; Levine & Lezotte, 1990; Mortimore, Sammons, Stoll, Lewis & Ecob, 1988; Sammons, Hillman & Mortimore, 1995). The expectation is thus that school characteristics such as encouragement by the head teacher can influence attitudes towards self-evaluation in both a positive and a negative sense (Saunders, 2000). Although attitudes towards self-evaluation have already been the subject of research (Blok et al., 2005; Devos et al., 1999; McBeath et al., 1999), the link with school functioning has not yet been explicitly investigated. This link is, however, of considerable importance in a policy context in which schools are expected to design and execute their own self-evaluation initiatives on an autonomous basis. This expectation could place very great demands on schools with certain characteristics. The lack of evidence regarding the impact of school characteristics on attitudes towards self-evaluation makes it difficult to assess exactly what can be expected of schools in this area. Kyriakydes and Campbell (2004, p. 32) have expressed the view that "the field of school self-evaluation is in an early stage of development."

All of this suggests that a considerable investment still needs to be made in expanding the knowledge base regarding attitudes towards self-evaluation and how the attitudes of team members and head teachers are related to the characteristics of the schools in which they work. This article sets out to make a contribution in this direction by formulating an answer to the following research questions: (1) What is the attitude of school principals and teachers towards self-evaluation and what are the differences in this regard? and (2) What school characteristics can explain the differences observed in this attitude?

We expect that the average attitude of all respondents towards self-evaluation will be positive to a limited degree. However, this general picture conceals a large variation. We expect, for example, that head teachers will express themselves more positively with regard to self-evaluation than teachers. Equally, we expect that the null hypothesis which states that there are no differences between respondents and schools with respect to this attitude will have to be rejected. We also expect that the attitude of the respondents towards self-evaluation will be more positive the more closely schools (a) approach organizational effectiveness, in the view of those respondents, and (b) correspond to the idea of a professional learning community. In the course of this article we will clarify these concepts from a theoretical point of view and discuss their relevance.

    Theoretical framework and operationalizations

The attitude of team members towards self-evaluation has to be situated and understood in the context in which this attitude arises. That context is in reality very broad and thus needs to be defined and demarcated. In order to examine relevant school-related characteristics, we used two lines of approach: school culture and the extent to which the school concerned can be characterized as a professional learning community. These school-related characteristics constitute the independent variables in our model. The dependent variable is the attitude towards self-evaluation. The different variables are explained below. We will discuss the psychometric characteristics of the tools used in section 4, 'Tools and clusters'.

    Dependent variable: attitudes towards self-evaluation

This study focuses on attitudes towards self-evaluation. We set out to describe the attitude of head teachers and team members and to explain the differences which exist. Specifically, we are interested in their personal attitudes vis-à-vis self-evaluation. An attitude indicates how positively or negatively an individual stands with regard to a particular issue (Petty and Wegener, 1998). Given that self-evaluations are complex phenomena, we looked at attitudes towards several different aspects. In order to permit comparison with the findings of other research, we operationalized attitudes towards self-evaluation using contrasting statements drawn from earlier studies (Blok et al., 2005; Meuret & Morlaix, 2003). These are, for example: self-evaluation 'tells us nothing new' versus 'teaches us a great deal'; self-evaluation 'takes up a lot of time' versus 'takes up very little extra time'; and self-evaluation 'only involves a few people' versus 'involves everybody'. The statements elicit respondents' views concerning the purpose and value of self-evaluation; its viability and complexity; the expected effects; the systematic nature of self-evaluations; and the extent to which they are perceived as revealing useful insights.
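How such contrasting-statement items translate into a scale score and a reliability estimate can be illustrated with a short sketch. This is not the authors' analysis code: the responses below are simulated, the 12-item, six-point format simply mirrors the scale described in this article, and the reliability function is the standard Cronbach's alpha formula.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated answers to 12 contrasting-statement items, scored 1-6
# (1 = the negative pole, e.g. "tells us nothing new"; 6 = the positive
# pole, e.g. "teaches us a great deal"). A shared latent attitude plus
# item-specific noise makes the items correlate, as real attitude
# items would.
n_respondents, n_items = 500, 12
latent = rng.normal(3.5, 0.8, size=(n_respondents, 1))
noise = rng.normal(0.0, 0.9, size=(n_respondents, n_items))
items = np.clip(np.rint(latent + noise), 1, 6)

# Scale score per respondent: the mean over the 12 items.
scale_scores = items.mean(axis=1)

def cronbach_alpha(x):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of sums)."""
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

print(round(float(scale_scores.mean()), 2), round(float(cronbach_alpha(items)), 2))
```

With correlated items of this kind, alpha lands in the high range comparable to the 0.92 reported for the attitude scale; with uncorrelated items it would fall towards zero, which is what the reliability check guards against.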

    Independent variables

In setting out the objectives of the study above we referred to school culture and school characteristics which constitute the hallmarks of a professional learning community. These are the independent variables which we have adopted in order to look at differences in school functioning. Our decision to opt for these two lines of approach was based on evidence which suggests that they are related to various school process and product characteristics, and because of the opportunities they offer in terms of examining differences between schools. Given that research into self-evaluations in schools is still a largely uncharted territory, connecting our research to these established lines of approach seemed to us to be a logical choice.

    School culture: organizational effectiveness of the school

In order to examine school culture we used the effectiveness perspectives identified by Quinn and Rohrbaugh (1983). Quinn and Rohrbaugh (1983, p. 369) developed a methodology for ascertaining the effectiveness of a knowledge system using four effectiveness perspectives which in turn are determined by two independent dimensions. The first dimension concerns the organizational focus of the entity (in this case, the school) and ranges from an internal focus on the people in the organization to an external focus on the organization itself. The second dimension represents the contrast between stability and control on the one hand, and flexibility and change on the other. These two dimensions result in four quadrants. The four perspectives are: the 'Human relations model' (creating close-knit teams/meeting groups in order to achieve openness, collaboration, loyalty, involvement and motivation); the 'Open system model' (modifying objectives in time to achieve optimal suitability with respect to the context); the 'Internal process model' (creating clear rules in order to achieve control and stability); and the 'Rational goal model' (ensuring that activities are appropriately geared towards the achievement of productivity and efficiency).

The above model is also referred to as the 'competing values' model, as the dimensions in the model seem at first sight to convey contradictory messages. For example, we want schools to be controllable and stable organizations, but we also expect adaptability and flexibility. The different quadrants can thus better be approached as a cohesive whole. The four quadrants and the values which underlie them are very illuminating as a means of understanding the attitude and behaviour of people in schools. Furthermore, although the different characteristics of the model may well seem to contradict each other, they are not mutually exclusive (Quinn & Rohrbaugh, 1983, p. 374). What makes these quadrants so instructive is the principle that all four must apply to at least a minimum extent in order to be able to regard a school as an effective organization (Quinn, Faerman, Thompson, & McGrath, 2003). It is up to schools to find a suitable balance between the dimensions. The desired balance cannot be established on a uniform basis (Maslowski, 2001). In the analysis we will work with clusters of schools ('school profiles') in order to adequately reflect possible variations in the balance. The operationalization of the effectiveness perspectives is based on the School Culture Inventory (Maslowski, 2001). The question asked is to what extent respondents are of the opinion that other people in their school regard a given set of values as important. For the internal process model the following values are included: stability, continuity, formal approach, consistency, control, regulation and security.

    The school as a professional learning community

Operating an effective school policy is now less and less seen as a process of incorporating knowledge developed elsewhere into the specific approach adopted in the individual school concerned. Nonaka and Takeuchi (1995), for example, take the view that implementing innovation and policy in organizations depends on the capacity to create knowledge inside the organization. An entity's capacity to develop and apply knowledge at all levels is increasingly regarded as a core organizational competency. With regard to professional development in schools there is, in other words, a change of emphasis from instruction (processing knowledge) towards learning (creating knowledge) (Donnenberg, 1999; Patriotta, 2004; Wenger, 1998). The most successful organizations, irrespective of whether these are profit or non-profit organizations, appear to be able to link the personal development of the individuals who make up the organization to an improved effectiveness at organizational level (Senge, 2001). In the context of research into self-evaluations it is also argued that connections should be sought with the underlying principles of a professional learning community (Devos et al., 1999, p. 38). In order to explain differences in attitudes towards self-evaluation, we specifically focused, within the concept of a professional learning community, on learning activities and learning outcomes in schools and on the attitude of team members with respect to their own learning.

It is also true of schools that the attitude of individuals towards their own learning is an important component of the organization. In order to investigate the conceptions regarding their own learning held by school staff we used a framework drawn from the socio-constructivistic theory of learning. The core concept behind this theory is that learning is a process which is active, constructive, cumulative, self-regulatory, target-oriented, context-related and social (Herman, Aschbacher, & Winters, 1992). In order to elicit respondents' perceptions with regard to their own learning we again used contrasting statements. For example: 'Learning is passively receiving information' versus 'Learning is thinking critically and making connections'; and 'I don't think that you get much from discussions with other people' versus 'You learn a lot from comparing your opinion with that of other people'.

Learning activities and learning results focus on the nature of the learning processes which take place in schools. In the present study we investigated variations in these areas using two lines of approach: the distinction between single loop and double loop learning (Argyris & Schon, 1978), on the one hand, and the theory of knowledge creation (Nonaka & Takeuchi, 1995), on the other. Argyris and Schon (1978, p. 2) believe that the way organizations (in this case schools) react to problems is indicative of the learning character of the organization. Learning in a learning organization is a dynamic process whereby not only is new knowledge added to the knowledge base of the organization (single loop learning), but dominant conceptions, norms, policy directions and objectives are also modified (double loop learning). Double loop learning is a crucial element in successful self-evaluations (Scheerens, 2004). In single loop learning, when something goes wrong, another strategy is sought which makes it possible to arrive at a solution, but this is done within the existing framework of objectives, values, plans and rules. The framework itself is not brought into question. In double loop learning, on the other hand, assumptions are challenged and (practical) theories are compared in order to come up with a new theory. This kind of learning may indeed result in a modification of the objectives, values, plans and rules of the organization. When collecting data for the present study, we asked the respondents to indicate how often people in their school reacted to problems in a given number of ways. Examples of such reactions are (single loop): doing nothing; failing to recognize the problem or minimalizing it; and analyzing the problem, but taking no action; and (double loop): gathering the different opinions of the various parties involved; collecting objective data in order to examine the problem; and reflecting critically on their own values and norms.

The final independent variable is based on the theory of knowledge creation (Nonaka & Takeuchi, 1995). Nonaka and Takeuchi (1995) take the view that, in the Western world, there is often an excessive emphasis on explicit learning, in the form of, for example, education and courses, and that a greater value should be attached to tacit learning, in which the use of experience and intuition plays an important role. They focus on four types of learning processes, which, in their view, constitute the motor of the knowledge creation process. For Nonaka and Takeuchi the conversion of knowledge is always a social process. It is therefore not a process which takes place in individuals, but rather a process between individuals. The model focuses on four types of knowledge conversion: externalization, which is the process by which personal knowledge is expressed in explicit concepts; combination, which is the process by which concepts are synthesized into a knowledge system; internalization, which is the process by which explicit knowledge comes to form part of personal knowledge; and socialization, which is the process by which experiences are shared and personal knowledge is created. In order to bring about organizational knowledge creation, the accumulation of personal knowledge at individual level must be transferred to other members of the organization through socialization, thereby creating a knowledge spiral (Nonaka & Takeuchi, 1995, p. 82). If this does not occur, the existing knowledge base of the organization will not be expanded (Simons & Ruijters, 2001). According to Nonaka and Takeuchi (1995, p. 84) only an interaction between personal and explicit knowledge can create opportunities for innovation. They also describe organizational knowledge creation as a continual and permanent interaction between personal and explicit knowledge and refer to an accompanying knowledge spiral. The theory of knowledge creation has been operationalized by examining to what extent processes in the four components of the knowledge spiral take place in schools (Chou & Tsai, 2004; von Krogh, Ichijo, & Nonaka, 2000). Examples of such items include: to what extent is it a customary practice in your school 'to report on your own experiences?', 'to receive coaching from experts?', 'to put ideas which have been acquired on a training course into practice?' and 'to observe other people at work?'.


This article reports on a survey study conducted among school principals, teachers and support staff examining their attitude towards self-evaluation and how they perceive the functioning of their school. The target population used in this study consisted of all those schools in Flanders (including both primary and secondary schools) which carried out an instrumentalized self-evaluation in the academic years 2004–2005 or 2005–2006. 'Instrumentalized' means that a written questionnaire was issued to teachers. This population cannot be clearly described and demarcated, as there is no database of Flemish schools which have conducted a self-evaluation using a questionnaire for teachers (instrumentalized). Given that we were unable to take a sample in the classic meaning of the term, we applied a number of principles in looking for potential sample schools. One of these principles was to survey as many schools as possible with a view to reducing sample error (Creswell, 2005). A second principle was trying to reflect the expected variation in the target population, by which we mean, for example, variation in educational level, variation in school network, variation in tools used and variation in previous experiences with self-evaluation. Our aim was not to represent the variation in the target population (which is in any case unknown) proportionally, but, instead, to ensure that the cases in the sample represented differences in the population. For this reason, we contacted schools by a variety of different channels in order to invite them to take part (e.g. via training institutions, pedagogical supervision, governmental communication, etc.).

The sample consisted of 96 schools, of which 51 were primary schools and 45 were secondary schools. In the 96 schools taking part in the study 2716 respondents were surveyed. The majority of the respondents were teachers: 85.1% of the respondents said that they spent most of their time teaching. Members of school management teams constituted a total of 7.3% of the sample: 3.9% of the respondents were head teachers or deputy heads, and 3.4% indicated that most of their time was taken up by management activities (policy support, technical consultancy, etc.). Finally, there was also a significant group of ancillary staff: 4.7% of the respondents were principally engaged in administrative or ICT-related support activities. Just under 3% were responsible for pedagogical or paramedical support at the schools studied.

Instruments and school clustering

Before going on to look at the actual results of our study we will report on the various preparatory analyses conducted relating to the development of scales and clusters of schools. The scales employed were developed using confirmatory factor analyses at respondent level (SEM using AMOS). Table 1 sets out the results for the different scales (for more details see Vanhoof, 2007). The fit indicators in the table reveal good to very good scales. The Cronbach's alpha values at respondent and school level (in brackets) also point in the same direction. Caution is only indicated with regard to the variable 'socio-constructivistic vision'.

In order to include the impact of the effectiveness perspectives as a school characteristic in the analyses, schools were clustered on the basis of their average scores on these perspectives. The reason for this approach is that there are indications at school level of possible problems with regard to multi-collinearity, which is not the case at respondent level. A hierarchical clustering of the 96 schools resulted in a three-cluster solution (see Table 2). The three clusters which we have identified are 'Strong on all perspectives', 'Relatively weak flexible/average control' and 'Strong flexible/weak control'. These groups represent 40.6%, 51% and 8.3% of the total number of schools, respectively.

Table 1. Scale characteristics of the dependent and independent variables.

Scale                                    | Number of items | N    | SEM  | p    | AGFI | CFI  | RMSEA | p close | Cronbach's alpha, respondent (school) level
Attitude towards self-evaluation         | 12              | 1313 | 174. | 0.00 | 0.96 | 0.99 | 0.05  | 0.37    | 0.92 (0.94)
EFP: Internal process model              | 7               | 2042 | 30.  | 0.00 | 0.98 | 1.0  | 0.05  | 0.48    | 0.82 (0.88)
EFP: Human relations model               | 7               | 2042 | 31.  | 0.00 | 0.98 | 1.0  | 0.04  | 0.80    | 0.93 (0.96)
EFP: Open system model                   | 7               | 2042 | 11.  | 0.02 | 0.99 | 1.0  | 0.03  | 0.93    | 0.92 (0.96)
EFP: Rational goal model                 | 7               | 2042 | 17.  | 0.01 | 0.99 | 1.0  | 0.03  | 0.91    | 0.89 (0.92)
Socio-constructivistic view on learning  | 6               | 2519 | 27.  | 0.00 | 0.99 | 0.99 | 0.04  | 0.22    | 0.60 (0.64)
Single versus double loop learning       | 12              | 2056 | 53.  | 0.06 | 0.98 | 0.99 | 0.01  | 1.0     | 0.92 (0.97)
Knowledge creation in the school         | 13              | 2090 | 291  | 0.00 | 0.98 | 0.97 | 0.04  | 0.82    | 0.87 (0.91)

Table 2. Results of the cluster analysis of schools. Effectiveness perspectives are mean scale scores (ranging from 1 to 5).

Cluster                                            | Number of schools | %     | Human relations | Internal process | Rational goal | Open system
Cluster 1: Strong flexible/weak control            | 8                 | 8.3%  | 4.34            | 3.69             | 3.43          | 3.88
Cluster 2: Relatively weak flexible/average control | 49                | 51.0% | 4.04            | 3.91             | 3.82          | 3.57
Cluster 3: Strong on all perspectives              | 39                | 40.6% | 4.42            | 4.11             | 3.92          | 3.95
Total                                              | 96                | 100%  | 4.22            | 3.97             | 3.83          | 3.75
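The clustering step behind Table 2 can be sketched roughly as follows. The school scores here are randomly generated stand-ins (the article does not publish its raw data), and Ward's method is one plausible choice of hierarchical algorithm; the article does not specify which linkage criterion was actually used.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Hypothetical school-level mean scores on the four effectiveness
# perspectives (human relations, internal process, rational goal,
# open system), one row per school, on the 1-5 scale.
school_means = rng.uniform(3.0, 4.5, size=(96, 4))

# Standardize each perspective before clustering so that no single
# perspective dominates the distance computation.
z = (school_means - school_means.mean(axis=0)) / school_means.std(axis=0)

# Hierarchical (Ward) clustering of the 96 schools, cut so that at
# most three flat clusters remain, mirroring the three-cluster
# solution reported in Table 2.
tree = linkage(z, method="ward")
labels = fcluster(tree, t=3, criterion="maxclust")

print(len(labels), sorted(set(labels.tolist())))
```

Cutting the tree with `criterion="maxclust"` avoids having to pick a distance threshold by hand, which suits an exploratory profile analysis like this one.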

  • s tow

    J. Vanhoof et al. / Studies in Educational Evaluation 35 (2009) 2128 25Results

    Wewill start by presenting a number of descriptive results andthen go on to test the impact of the school and respondent-relatedcharacteristics on attitudes towards self-evaluation. This will bedone using multi-level analyses.

    Descriptive results

    Table 3 shows how the respondents stand in relation to self-evaluation. The frequency distribution shows the opinions ofthe various respondents on a continuum between two contrastingviewpoints. Respondents who score 1 tend towards the left-handvariant (for example: self-evaluation is subjective); respondentswho score 6 tend towards the right-hand variant (for example: self-evaluation is objective). We found that, on average, respondentsexpressed themselves relatively positively with regard to self-evaluation. Around three quarters of the respondents subscribed tothe view that one learns a lot from it and that it results in betterteaching and better management. However, despite these positiveexpectations with regard to self-evaluation, self-evaluation is notpopular. The question as to whether self-evaluation was popularamong the majority of team members obtains the lowest averagescore of all (Av. = 2.75). It appears that this negative attitude cannotreally be attributed to an unduly limited expectationwith regard toresults. In fact, the items which score low are those which relate toprocess characteristics. The respondents consistently indicate thatself-evaluation takes up a lot of time (Av. = 3.27); that it is asubjective phenomenon (Av. = 3.39); and that it is difcult to carryout (Av. = 3.56). On average these items score below or at themidpoint of the evaluation scale (i.e. 3.5).

    Both the frequency distribution and the standard deviation of

    Table 3Frequencies (%), mean scores and standard deviation for the items of the Attitude

    Self-evaluation 1

    . . . tells us nothing new/teaches us a great deal 4

    . . . does not result in better teaching/results in better teaching 4

    . . . takes up a lot of time/takes up very little extra time 11

    . . . only involves a few people/involves everybody 6

    . . . does not lead to better management/leads to better management 6

    . . . is difcult to interpret/is easy to understand 7

    . . . is not popular with the majority of team members/is popular 18

    . . . is difcult to carry out/can be carried out relatively easily 6

    . . . is not cost-effective/is cost-effective 8

    . . . depends on chance/results in a reliable picture 4

    . . . is subjective/is objective 10

    . . . is a snapshot/represents our development correctly 8

    Scale score Attitude towards self-evaluationthe average scores (see Table 3) reveal that there is a considerablevariation in the judgements expressed by the respondents for eachitem. The scale attitude towards self-evaluation exhibits anaverage of 3.68 and a standard deviation of 0.95. This variationappears to be related to a number of background characteristics ofschools and respondents or at least this is what the multi-levelanalyses reveal.

    Explanatory multi-level analyses

    We used multi-level analyses in order to explain the differencesin respondents answers, which enabled us to arrive at a decom-position of respondents answers with regard to their attitudestowards self-evaluation into an individual component and a schoolcomponent. When interpreting the results yielded by this analysis,we considered the basic model, gross model and net model (seeTable 4). In this table we give the estimate and standard error (inbrackets) for each regression coefcient. Statistically signicanteffects are shown in bold type. Finally, with regard to interpretationit is also important to point out that all the variables in the multi-level models have been standardized.

    The basic model reminds us that amulti-level analysis is indeednecessary. The variances at both school and respondent level aresignicantly different from0.7% of the total variance in the attitudeof the respondents towards self-evaluation can be attributedto differences between schools. The remaining 93% must beattributed to differences between respondents within schools,which is an important nding in itself. The differences withinschools are much greater than the differences between schools.

The gross models represent the effect of each of the individual characteristics at school and respondent level without controlling for other characteristics. Independently of the question as to what the differences found have to be attributed to, the gross models give us an idea of differences between educational levels, between head teachers and team members, and between schools with and without support. We mainly found statistically significant effects for respondent-related characteristics. At school level we only find a difference between two school clusters with regard to the effectiveness perspectives. Respondents from schools which score strongly on the different effectiveness perspectives score 0.32 SD higher on the attitude towards self-evaluation than schools which score low on the management-orientated effectiveness perspectives. The other differences between these school clusters are not statistically significant. At respondent level it appears that it is primarily the position held in the school which is of most importance. Team members who are involved in managing the school concerned (i.e. head teachers, deputy head teachers and middle managers) score 0.59 SD higher than other team members (i.e. teachers, Equal Educational Opportunities scheme teachers and support staff). The attitude of respondents towards self-evaluation is also more positive if respondents have enjoyed some form of support. This means that the respondent concerned has, for example, recently consulted relevant professional literature or has attended a training course on self-evaluation. Finally, each of the school-related indicators included yields a statistically significant effect on the attitude towards self-evaluation. The strongest effects are found for the respondents' perception regarding the degree of in-depth learning and knowledge creation in their school. For each difference of one standard deviation point between respondents on these indicators, they will, on average, score with a difference of 0.36 SD and 0.33 SD, respectively, on attitudes towards self-evaluation. Another important finding is that various background characteristics of schools and respondents do not show a statistically significant relationship with the attitude of respondents towards self-evaluation. There are no statistically significant differences between the different educational levels
(primary and secondary education) and different school networks. The specific branch in which respondents in secondary schools mainly work (general, technical or vocational education), whether or not they have experience as a school with self-evaluation, and the size of the school do not show a relationship to their attitude towards self-evaluation. The same applies to the age of respondents.

[Table 4. Multi-level analyses 'Attitude towards self-evaluation': estimates (SE) for the null, gross and net models. School-related predictors: primary (vs. secondary education); no self-evaluation experience (vs. SE experience); school size (number of teachers); school culture cluster 1 and cluster 3 (vs. cluster 2). Respondent-related predictors: school leader (vs. team member); use of external support (vs. no use); perception of human relations, internal process, open system, rational goal, own learning, single- vs. double-loop learning, and knowledge creation. The variance components survive the extraction intact: 0.071 (school level) and 0.936 (respondent level) in the null model, against 0.031 and 0.655 in the net model; the null-model deviance is 7548, with 96 schools and 2686 respondents. Most fixed-effect coefficients are fused in the extraction and not reliably recoverable.]

Finally, we come to the net model. In interpreting the net model, we should start by pointing out that this model improves the predictive power by 30% at school level and by 32% at respondent level in comparison with the null model (the R² values are calculated according to the method used by Snijders and Bosker (1999)). The first interpretative finding from the net model is that the position held by the respondent in the school concerned also appears to play a very important role, after control for other characteristics. Independently of the question of what they think about learning and how they perceive the school culture, members of the school management team score 0.45 SD higher than the other team members. The use of some form of support also exhibits a relationship with a more positive attitude towards self-evaluation. Finally, the net model reveals that the majority of school-related indicators, even after control for other characteristics, have a statistically significant effect on the attitude towards self-evaluation, although the regression coefficients are lower than in the gross model. The greatest impact is exerted by in-depth learning and degree of knowledge creation. It is worth noting that the perception of the importance of human relationships has, after control, a statistically significant negative effect. After control for other school and respondent characteristics, it therefore appears that respondents have a less positive attitude towards self-evaluation in so far as they are more of the opinion that their school culture is characterized to a greater degree by a strong emphasis on this effectiveness perspective (i.e. internal flexibility).
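The respondent-level R² can be reproduced from the null- and net-model variance components in Table 4 using the Snijders and Bosker (1999) level-one formula. This is a sketch: the school-level R² additionally requires the average number of respondents per school, so only the level-one value is computed here.

```python
# Variance components from Table 4 (null model vs. net model)
school_null, respondent_null = 0.071, 0.936
school_net, respondent_net = 0.031, 0.655

# Snijders & Bosker (1999): proportional reduction in level-one
# prediction error, based on the sum of both variance components
r2_level1 = 1 - (school_net + respondent_net) / (school_null + respondent_null)
print(f"{r2_level1:.0%}")  # → 32%, the improvement reported at respondent level
```

The formula deliberately sums both components, because the prediction error for an individual respondent contains both the school effect and the respondent-level residual.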

Conclusion

The descriptive results suggest that schools are reluctant to carry out self-evaluations. Self-evaluation is still widely regarded as something strange. It thus appears that modes of reflection such as the self-evaluations that are the subject of the present article, which require a considerable openness to others and imply a degree of vulnerability, are still unpopular with team members. Despite this, however, there is a strong belief in the possibilities of self-evaluation and the respondents have positive expectations with regard to self-evaluation. There is a recognition that the process can produce valuable results, but the nature of the activities involved in self-evaluation has the effect of putting people off initiating a self-evaluation or even taking part in an externally imposed self-evaluation. The causes of this wariness are thus to be found more with the process than with the product of self-evaluation. There is a perception, for example, that self-evaluation takes up a lot of time and that it is difficult to carry out. People thus believe in the potential power and value of self-evaluations, but the accompanying process causes many to hesitate when it comes to actually carrying one out. It is also possible that working practices and a lack of openness in many schools act as an obstacle to speaking freely about internal concerns and problems.

From our multi-level analyses of attitudes towards self-evaluation we can conclude that our theoretical expectations have largely been confirmed. The way in which respondents regard self-evaluation processes has not come out of thin air. The analyses clearly demonstrate that the attitude towards self-evaluation is linked to a considerable degree to the characteristics of the broader functioning of the schools concerned. As expected, the attitude of respondents is clearly more positive in proportion to the extent to which the school-related characteristics studied (organizational effectiveness and professional learning community) are more

pronounced. Only the limited net effect of human relations points in another direction. We must not forget, however, that this effect is found after control for the other predictors in the net model. An excessive orientation towards human relations might jeopardize the systematic character of the self-evaluation (Devos & Verhoeven, 2003). Furthermore, the research hypothesis on the differences between principals and teachers is also confirmed by the data. The other background characteristics of schools and respondents studied appear, however, not to be statistically significant. We also note the large variation between respondents within schools. The differences within the schools are greater than the differences between schools, although it should be noted that the differences between schools are also statistically significant.


The suggestion that some schools have a positive attitude towards self-evaluation, whereas other schools' team members have a negative attitude, requires a certain degree of qualification. Our study clearly demonstrates that there are also considerable differences within schools. With regard to attitudes towards self-evaluation, it appears that only 7% of the total variation in respondents' scores can be attributed to differences between schools. The remaining 93% must therefore be attributed to differences between respondents within schools. Thus the image of one school where everyone has a positive attitude and another where everyone has a negative attitude does not correspond to reality: there are, in fact, major differences between respondents in every school. These differences go further than just the distinction between head teachers and team members: there are also differences among team members. This study shows that it is dangerous to think in terms of stereotypical images of schools and the people who work in them. Responding appropriately to the context when implementing self-evaluation processes therefore requires not just a school-related approach, but also a personalized approach.

Head teachers express themselves considerably more positively with regard to self-evaluation-related characteristics than team members. Our contention is that this must not be attributed, or at least not solely, to head teachers having a more positive view of what goes on in their school. It is not unthinkable that head teachers express themselves more positively both because they are more closely involved in the self-evaluation process and because they are in a position to determine certain aspects of the self-evaluation themselves, while ordinary teachers are not. They may well, for example, have a better view of the communication and participatory initiatives which are set up as part of the self-evaluation, or be better placed to make the link between the self-evaluation and the overall school vision (Devos & Verhoeven, 2003; Van Petegem et al., 2005). The school management team can, for example, set clear action points for themselves (and carry these out), something which the other team members do not experience at first hand or of which they may not even be aware. A self-evaluation which is regarded as having been a success by the management team in a school may, therefore, be regarded by the other team members as having been a failure (Vanhoof, 2007). Consequently, attitudes towards self-evaluation on the part of the two groups can be influenced in different directions. The high scores obtained by head teachers do not, therefore, necessarily have to be viewed as suspect, nor do they necessarily represent a statistical quirk (Watling & Arlow, 2002). The scores might accord perfectly with the reality of self-evaluation processes.

Attitudes towards self-evaluation are not something separate from the broader functioning of the school. This means that schools, if there is a desire to build in guarantees for successful self-evaluation, also need to score sufficiently high on these elements (Schildkamp, 2007). The school-related characteristics are, as it were, an indication of their readiness, of their being at the start position, or an indication that what might be called the basic competencies which a school must possess in order to get the full benefit from self-evaluation are in place. Judging from the attitudes towards self-evaluation observed, it would appear that the pre-conditions required to conduct a successful self-evaluation had still not been sufficiently achieved in the schools studied. For this reason, schools should make reflection a strategic objective as part of the school's functioning. Carrying out self-evaluations of the type which we have studied is, in effect, a systematic form of reflection at team or school level. We therefore see reflection as a step towards and a pre-condition for the use of instrumentalized self-evaluations. Working on successful self-evaluations thus requires, first and foremost, a sufficiently developed reflective capacity (York-Barr, Sommers, Ghere, & Montie, 2001). Individuals, groups and organizations can only learn if they are prepared to reflect on their own practice (Mitchell & Sackney, 2000). However, the dominant culture in many schools is one of action rather than words, with little or no attention paid to reflection. This is often excused on the basis of lack of time. We have found, however, that attitudes towards forms of reflection also act as an impediment to reflection. Self-evaluation is not a popular activity among many teachers because it is seen as a form of social control. This perception of self-evaluation as something threatening has also been observed elsewhere (Clift, Nuttall, & McCormick, 1989). There is often a lack of openness within the team and an unwillingness to look critically at their own performance. A change of mentality is therefore also required. The attention paid to school culture and to the characteristics of professional learning communities, in both educational practice and educational research, is thus justified in the light of the findings of the present study.

At present, team members all too often get involved in a self-evaluation process because it is expected of them (Van Petegem et al., 2005). Self-evaluation is carried out in schools in the first place because it is seen as an external necessity by the school management team. This is a fundamentally different approach from carrying out a self-evaluation because it is perceived as an internal necessity. External necessities are shortcomings which are felt by others, whereas internal necessities are shortcomings which are felt and recognized by the people involved themselves. This distinction is crucial in implementing innovatory initiatives such as carrying out self-evaluations. Obliging team members to take part in self-evaluation is what is known as the adoption approach. In other words, team members are to a large degree regarded as members of a rational organization charged with implementing what others deem to be important (and hence worthy of innovation). The process of implementation is thus in danger of receiving too little attention, and as a result innovations are in danger of ending up having scant connection with actual practice (Guthe, 1997). If the intention is to find a way of introducing self-evaluations in schools, this will require focusing attention on the interaction that exists between the specific content of the innovation (as team members share a responsibility for quality assurance) and the characteristics of the individual team members. The perceptions of those involved, and in particular the significance which is given to self-evaluation, are of major importance for the success of this innovative process (Fullan, 1991). In the present study we have found an unwillingness to carry out self-evaluations. Staessens (1993, p. 127) makes the following observation: 'A school's reaction to a change can, in part, be understood with reference to the school's individual culture, or to be more precise, the gap between the school's existing values and norms and the values and norms on which the innovation is based.' This gap is currently still considerable. Furthermore, so long as innovatory initiatives such as self-evaluation are still perceived by many team members as something which has been imposed on them, and thus as something threatening, this gap is in danger of becoming even greater.


References

Argyris, C., & Schon, D. A. (1978). Organizational learning: A theory of action perspective. Reading: Addison-Wesley.

Blok, H., Sleegers, P., & Karsten, S. (2005). Schoolzelfevaluatie in het basisonderwijs; een terugblik op de zelfevaluatiefase van Ziezo [School self-evaluation in primary education]. Amsterdam: SCO-Kohnstamm Instituut.

Chou, S.-W., & Tsai, Y.-H. (2004). Knowledge creation: Individual and organizational perspectives. Journal of Information Science, 30(3), 205–218.

Clift, P. S., Nuttall, D. L., & McCormick, R. (Eds.). (1989). Studies in school self evaluation. London: The Falmer Press.

Creswell, J. W. (2005). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. New Jersey: Pearson Prentice Hall.

Devos, G., Verhoeven, J., Van den Broeck, H., Gheysen, A., Opbrouck, W., & Verbeeck, B. (1999). Interne zelfevaluatie-instrumenten voor kwaliteitszorg in het Secundair Onderwijs [Internal self-evaluation instruments for quality care in secondary education]. Leuven: KUL, Departement Sociologie.

Devos, G., & Verhoeven, J. (2003). School self-evaluation: Conditions and caveats. The case of secondary schools. Educational Management & Administration, 31(4), 403–420.

Donnenberg, O. H. J. (1999). Nieuwe werkstructuren – nieuwe leervormen [New working structures – new types of learning]. In J. M. J. Dekker (Ed.), Opleiders in organisaties [Educators in organizations] (pp. 116–137). Deventer: Kluwer.

Fullan, M. (1991). The new meaning of educational change. London: Cassell.

Griffith, J. (2003). Schools as organizational models: Implications for examining school effectiveness. The Elementary School Journal, 104(1), 29–47.

Guthe, K. A. K. (1997). Indicatoren van schoolontwikkeling [Indicators for school development]. Enschede: OCTO, University of Twente.

Herman, L., Aschbacher, R., & Winters, L. (1992). A practical guide to alternative assessment. Alexandria, VA: Association for Supervision and Curriculum Development.

Kallgren, C. A., & Wood, W. (1986). Access to attitude-relevant information in memory as a determinant of attitude-behaviour consistency. Journal of Experimental Social Psychology, 38.

Kyriakides, L., & Campbell, R. J. (2004). School self-evaluation and school improvement: A critique of values and procedures. Studies in Educational Evaluation, 30(1), 23–36.

Levine, D. U., & Lezotte, L. W. (1990). Unusually effective schools: A review and analysis of research and practice. Madison: National Centre for Effective Schools Research and Development.

Maslowski, R. (2001). School culture and school performance. Enschede: Twente University Press.

McBeath, J. (1999). Schools must speak for themselves. London: Routledge.

McBeath, J., Meuret, D., Schratz, M., & Jakobsen, L. B. (1999). Evaluating quality in school education. Brussels: European Commission.

Meuret, D., & Morlaix, S. (2003). Conditions of success of a school's self-evaluation: Some lessons of a European experience. School Effectiveness and School Improvement, 14(1), 53–71.

Mitchell, C., & Sackney, L. (2000). Profound improvement: Building capacity for a learning community. Lisse: Swets & Zeitlinger Publishers.

Mortimore, P., Sammons, P., Stoll, L., Lewis, D., & Ecob, R. (1988). School matters: The junior years. Somerset: Open Books.

MVG. (2006). Onderwijsspiegel Schooljaar 2004–2005. Verslag over de toestand van het onderwijs [Educational Mirror 2004–2005]. Brussels: Ministry of the Flemish Community.

Nonaka, I., & Takeuchi, H. (1995). The knowledge-creating company: How Japanese companies create the dynamics of innovation. Oxford: Oxford University Press.

Patriotta, G. (2004). Organizational knowledge in the making: How firms create, use, and institutionalize knowledge. Oxford: Oxford University Press.

Petty, R. E., & Wegener, D. T. (1998). Attitude change: Multiple roles for persuasion variables. In D. Gilbert, S. Fiske, & G. Lindzey (Eds.), The handbook of social psychology. New York: McGraw-Hill.

Quinn, R., Faerman, S. R., Thompson, M. P., & McGrath, M. (2003). Becoming a master manager: A competency framework. New York: Wiley.

Quinn, R., & Rohrbaugh, J. (1983). A spatial model of effectiveness criteria: Towards a competing values approach to organizational analysis. Management Science, 29, 363–377.

Sammons, P., Hillman, J., & Mortimore, P. (1995). Key characteristics of effective schools: A review of school effectiveness research. London: Office for Standards in Education.

Saunders, L. (2000). Understanding schools' use of value added data: The psychology and sociology of numbers. Research Papers in Education, 13(3), 241–258.

Scheerens, J. (2004). The evaluation culture. Studies in Educational Evaluation, 30(2), 105–124.

Schildkamp, K. (2007). The utilisation of a self-evaluation instrument for primary education. Enschede: PrintPartners Ipskamp.

Senge, P. (2001). Schools that learn: A fifth discipline fieldbook for educators, parents, and everyone who cares about education. New York: Doubleday.

Simons, P. R. J., & Ruijters, M. (2001). Learning professionals: Towards an integrated model. Paper presented at the biannual conference of the European Association for Research on Learning and Instruction, Fribourg.

Snijders, T., & Bosker, R. J. (1999). Multilevel analysis: An introduction to basic and advanced multilevel modelling. London: Sage.

Staessens, K. (1993). Identification and description of professional culture in innovative schools. Qualitative Studies in Education, 6, 111–128.

van Aanholt, T., & Buis, T. (1990). De school onder de loep [The school under a magnifying glass]. Culemborg: Educaboek.

Van Petegem, P., Verhoeven, J. C., Buvens, I., & Vanhoof, J. (2005). Zelfevaluatie en beleidseffectiviteit in Vlaamse scholen. Het gelijke onderwijskansenbeleid als casus [Self-evaluation and policy effectiveness of schools. A case study of the Flemish equal chances policy]. Ghent: Academia Press.

Vanhoof, J. (2007). Zelfevaluatie binnenstebuiten. Een onderzoek naar het verloop en de kwaliteit van zelfevaluaties in Vlaamse scholen [Self-evaluation inside out. The process and quality of self-evaluation in Flemish schools]. Antwerp: Antwerp University.

von Krogh, G., Ichijo, K., & Nonaka, I. (2000). Enabling knowledge creation: How to unlock the mystery of tacit knowledge and release the power of innovation. Oxford: Oxford University Press.

Watling, R., & Arlow, M. (2002). Wishful thinking: Lessons from the internal and external evaluations of an innovatory education project in Northern Ireland. Evaluation & Research in Education, 16(3), 166–181.

Wenger, E. (1998). Communities of practice: Learning, meaning and identity. Cambridge: Cambridge University Press.

York-Barr, J., Sommers, W. A., Ghere, G. S., & Montie, J. (2001). Reflective practice: An action guide for educators. Thousand Oaks, CA: Corwin Press.



