

Evaluation of Teaching and Learning Strategies

Sybille K Lechner BDS, MDS FRACDS, FPFA, FICD

Head, Discipline of Removable Prosthodontics, School of Dental Studies, University of Sydney

Abstract - With the growing awareness of the importance of teaching and learning in universities and the need to move towards evidence-based teaching, it behooves the professions to re-examine their educational research methodology. While the what, how and why of student learning have become more explicit, the professions still struggle to find valid methods of evaluating the explosion of innovation in teaching/learning strategies. This paper discusses the problems inherent in applying traditional experimental design techniques to advances in educational practice.

Keywords: Medical education, dental education, evaluation, best evidence education

Introduction

     The mainstream of tertiary education has seen a massive transformation over the last few decades1 and most educational facilities have now made the quantum leap from trying to be 'good teachers' to making the learning process more readily available to students. They have also recognized the distinction between deep and surface learning.2

     The what, how and why of student learning have become more explicit. What students now need to know is directly related to the information explosion, which is evident in every field of study. The goalposts have changed from teaching facts to helping students learn how to find relevant information, how to assess it and how to organize disparate information into a cohesive whole. How students learn is becoming clearer and the move towards student centered learning is finding support in the physiological sciences.3-6 Why students learn has not changed, nor does it seem likely to: students learn primarily to graduate. Educational bodies now acknowledge the importance of the link between program aims and assessment, of formative assessment as a valuable teaching/learning resource and of summative assessment as a motivational force.7

     These important insights have triggered a virtual explosion of innovation in teaching/learning strategies and it is becoming increasingly urgent to find ways of evaluating them. However, evaluation of teaching strategies is problematic. The ability of bright people to learn what they need to know despite any curriculum cannot be discounted, and high-aptitude students tend to succeed regardless of the instructional strategy used.8,9 This does not mean that educators should consider themselves free to propose changes in an ad hoc manner. There is a need for educational research, and ongoing evaluation must be considered a fundamental part of educational advance.10

Best Evidence

     In an attempt to follow the current trend of evidence based medicine, educational bodies are concerned with building a body of best evidence in medical education (BEME).11 A meeting held in London in 1999 highlighted the need for evidence-based teaching.12 Although this initiative is in the medical field, successful strategies could well be extended into the dental field. Hart and Harden13 identified six steps in the practice of evidence-based teaching: framing the question, developing a search strategy, producing the raw data, evaluating the evidence, implementing change and evaluating that change. This group is working to provide guidelines for educational research. However, it is increasingly evident that synthesizing and reviewing evidence is a complex matter.14 Searching databases using the descriptors 'education' and 'evidence-based' yields few articles on the concept of best evidence medical education.15 This has led to a suggestion that BEME be renamed BEAME (best evidence available in medical education).16 Even with this proviso, it is difficult to see how the new goals are to be accomplished. The difficulty of evaluating any educational philosophy in a scientific manner is highlighted by the many different methodologies used in attempts to prove the efficacy of problem based learning (PBL) as a lifelong learning resource.17

Subject Pool

     One of the most frustrating aspects of evaluating education is that academics in the medical and dental fields, trained in the rigid scientific methods that are mandatory for evaluating treatment modalities, are tempted to transpose those methods to evaluating education. Such attempts, while superficially pleasing in the numerical data they supply, are ultimately unreliable. The rigid quantitative scientific method of controlled experimentation cannot be considered valid in an environment where the variables affecting the subject pool are almost as numerous as, or more numerous than, the pool itself; the student pool in any one year of a dental or medical school is statistically minute.
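     To make the point concrete, here is a rough, purely illustrative power calculation (a sketch added for this discussion, not drawn from any cited study; the group size of 40 per arm, the effect size of 0.3 and the 5% significance level are hypothetical assumptions):

import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power(effect_size, n_per_group, z_crit=1.96):
    """Normal-approximation power of a two-sided, two-sample comparison
    at the 5% level (the negligible opposite tail is ignored).
    effect_size is Cohen's d: difference in mean scores / pooled SD."""
    noncentrality = effect_size * math.sqrt(n_per_group / 2.0)
    return 1.0 - normal_cdf(z_crit - noncentrality)

# A hypothetical single-year cohort split into two teaching groups of 40,
# looking for a modest effect (d = 0.3) of a new strategy on exam scores.
print(round(approx_power(0.3, 40), 2))   # roughly 0.27 -- far below the conventional 0.8

# Size each group would need for 80% power to detect the same modest effect.
n = 2
while approx_power(0.3, n) < 0.8:
    n += 1
print(n)                                 # roughly 175 students per group

     On these assumed figures a whole-cohort comparison would detect a modest effect barely one time in four, and the group size needed for conventional power is several times larger than most yearly intakes; this is the statistical sense in which the pool is minute.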

     While the subject pool is small, prior knowledge, motivation, opportunity, access to materials, the Hawthorne effect, time constraints, emotional status and even financial pressure can count among the influences affecting learning. Including sequential years and/or different schools can confuse the issue further by adding the variable of different student cultures. Added to this, the value of control groups in educational evaluation is highly questionable. Random allocation of students may address initial systematic differences between the experimental and control subjects. However, in an educational context, the "randomness" soon becomes corrupted since there are rarely placebos in education. Students have opinions and will have an attitude towards whichever group they are in. This, in itself, necessarily biases the outcome. Nor is it possible to limit students in the interchange of ideas, as it is to limit types of medication or treatment in clinical trials. Students in one group will find it simple to access friends' learning resources if they are interested in doing so.18 The author has heard of one study testing a computer aided learning (CAL) program in which the CAL students did far better in a test than the control, traditionally taught students. However, a chance conversation with a student elicited the fact that the CAL program was so abysmal that students in that group panicked and spent many hours in the library studying the subject. While this story is anecdotal, the possibility of such an occurrence must surely give the serious educational researcher pause when considering the efficacy of control groups as a tool for evaluation in education.

Outcomes

     The other great difficulty lies in delineating a clear definition of outcomes. Wilkes and Bligh19 group several types of evaluation into student oriented, program oriented, institution oriented and stakeholder oriented. The indicators cover a wide area, ranging from class attendance through patient satisfaction, questionnaires and test results to peer evaluation. Several studies have attempted evaluation using examinations,20-24 the number of student inquiries regarding the levels of knowledge required for examinations,25 follow-up surveys26,27 and self-evaluation by the students.28-30

     Norman and Schmidt31 are adamant that outcomes can be measured, but only if they are small. They contend that big outcomes cannot be measured in any meaningful way: the variables are too numerous, too complex and too multi-factorial, with so many unseen interacting forces and with outcomes so distant from the learning setting, that any predicted effects would inevitably be diffused by myriad unexplained variables.

     The simplest measurement of outcome is by examination. Examination results can be shown numerically and sit well with statistical analysis. However, they cannot be relied on to give a true evaluative picture. For example, the fact that 98% of students could attain a desired standard of knowledge when a lecture series was replaced by a CAL program32 cannot be extrapolated to any other group of students and cannot be taken to show that the CAL program was a "better" way of teaching. Nor can examination results measure deep learning and lifelong learning, which must now be accepted as the ultimate learning goals. Methods for measuring these parameters have not yet been developed.33 For example, a study showing that students from a school with a problem based learning (PBL) curriculum (McMaster) had a better knowledge of hypertension 10 years after graduation than those from a traditional school (Toronto) is open to several interpretations: further investigation showed that cardiovascular research was an outstanding accomplishment of McMaster University, and similar results may not have been seen if other fields had been investigated. Even if this were the case, there is no "proof" that this was a direct effect of PBL or an indirect effect mediated by other aspects of the course.34 The possible causes are seemingly endless.

     Currently, the most pragmatic approach in educational evaluation is to focus on students' perception of their experience with a learning program, and this approach has been used in several studies.20,23,24,35-39 Enjoyment and success engender a winning cycle in the learning environment. If teaching resources can involve students and lead them to be successful in their endeavors, they are more likely to enjoy their tasks and want to become even more involved.

     Exploring students' perceptions lends itself to a qualitative research methodology, the focus of which is understanding rather than measuring. Qualitative research can tap into the students' psyche to give a depth of understanding that is the key to ongoing educational development. It is particularly germane to evaluation of the formative aspects of a program and should be part of the development of any educational resource.35,40 Good qualitative research goes beyond reporting what people say to why they are saying it. It distinguishes between public, spoken attitudes, which are socially acceptable; attitudes which may have no vocabulary or are difficult to verbalize and are therefore suppressed; and private attitudes, which may not be socially acceptable and are therefore consciously repressed.41 Qualitative researchers in the commercial field require a high level of expertise to avoid a host of pitfalls, and qualitative research methodology has its own strict guidelines. In an educational context it would be imperative that questionnaires dealing with students' perceptions be anonymous and that focus groups be conducted by people known to be far removed from any summative assessment results.

     Observation of students involved in a program is another invaluable developmental device. However, this is a subjective activity and, with a single observer, is only as valid as the perception of that observer.

     Quantitative analysis of the factors found to be relevant to students through this exploratory research can then proceed, each small positive outcome accumulating to build a broad picture of what is likely to produce success.
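     The sketch below illustrates, in the simplest possible terms, how such small outcomes might accumulate; it uses fixed-effect, inverse-variance pooling, and the effect estimates and standard errors are invented for illustration rather than taken from any of the cited studies:

import math

# (effect estimate, standard error) from four hypothetical small studies,
# e.g. standardized gains in examination performance for one factor of interest.
studies = [(0.25, 0.20), (0.10, 0.15), (0.30, 0.25), (0.15, 0.18)]

weights = [1.0 / se ** 2 for _, se in studies]    # inverse-variance weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# Each study alone has a wide interval (SE 0.15-0.25); the pooled estimate
# is noticeably more precise (SE about 0.09).
print(f"pooled effect {pooled:.2f}, 95% CI +/- {1.96 * pooled_se:.2f}")

     Fixed-effect pooling is only one of many ways such results could be combined; the point is simply that precision accumulates as small, consistent findings are added together.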

     Triangulation, which is the application and combination of several research methodologies in the study of the same phenomenon, can be employed in both quantitative (validation) and qualitative (inquiry) studies. It has been proposed that, by combining multiple observers, theories, methods and empirical materials, researchers can hope to overcome the weaknesses or intrinsic biases of any particular methodology and the problems that come from single-method, single-observer, single-theory studies.42

Summary

     With the growing awareness of the importance of teaching and learning in universities, it behooves the professions to re-examine their educational research methodology. Numbers and statistical significance are very comforting to minds trained in the rigors of laboratory experimentation and clinical trials. However, this methodology cannot be imported wholesale into educational research. There is a need for close scrutiny of the validity of these methods in the very different field of education. The key to good research is flexibility and an understanding of the limitations of any research methodology in any given field. Hopefully an accumulation of the "small outcomes" which Norman and Schmidt envisage will eventually result in a broad spectrum of probability across a multitude of studies.

     Currently, the most realistic indicator of a program's success is the students' own perception of their learning. The work of the BEME group and other interested health educators will, in time, produce more objective parameters. These parameters will not be unrealistic shadows of those used in clinical and laboratory trials of the health professions. They will be specific to the educational field and recognize the unique aspects of the tertiary student subject pool and the complex nature of the expected outcomes.

References

  1. Laurillard, D. 1993, Re-thinking University Teaching: A Framework for the Effective Use of Educational Technology. London: Routledge
  2. Marton F, Saljo R. On qualitative differences in learning: I. Outcome and process. Brit J Educ Psych 1976;46:4-11
  3. McCrone J. Wild minds: The dynamics of the neural code. New Scientist. 1997;156:26-30
  4. Reese AC. Implication of results from cognitive science research for medical education. Acad Med 1996;71(9):988-1001
  5. Anderson MC, Neely JH. Interference and inhibition in memory retrieval. Cited in Reese AC, Acad Med 1996;71(9):988-1001
  6. Regehr G, Norman GR. Issues in cognitive psychology: Implications for professional education. Academic Medicine 1996; 71: 988-1001
  7. Biggs J. What the Student Does: Teaching for Enhanced Learning. Higher Education Research & Development. 1999;18 (1): 57-75
  8. Woodward CA. Problem based learning in medical education. Advances in Health Sciences Education 1996;1:83-94
  9. Cronbach, LJ, Snow RE. Aptitudes and instructional methods: A handbook for research on interactions. Irvington, New York, NY., 1977
  10. Van Der Vleuten CPM, Dolmans DHJM, Scherpbier AJJA. The need for evidence in education. Medical Teacher 2000;22(3):246-250
  11. Hart I. Best Evidence Medical Education (BEME) (editorial). Medical Teacher 1999;21(5): 453-454.
  12. Best Evidence Medical Education (BEME): report of meeting, 3-5 December 1999, London, UK. Medical Teacher 2000;22(3):242-245
  13. Hart IR, Harden RM. Best evidence medical education (BEME): a plan for action. Medical Teacher. 2000; 22( 2):131-135
  14. Wolf FM. Lessons to be learned from evidence-based medicine: practice and promise of evidence-based medicine and evidence-based education. Medical Teacher. 2000;22( 3): 251-259
  15. Harden RM, Lilley PM. Best evidence medical education: the simple truth. Medical Teacher 2000;22(2):117-119
  16. Bligh J, Anderson MB. 2000 Medical teachers and evidence (Editorial). Medical Education. 2000;34:162-163
  17. Albanese M, Mitchell S. Problem-based learning: a review of the literature on its outcomes and implementation issues. Academic Medicine 1993;68(1):52-81
  18. Sefton A. Evaluating the new technologies. Proceedings of Evaluating New Teaching Technologies, Uniserve Science, Sydney, pp 13-19
  19. Wilkes M, Bligh J. Evaluating educational interventions. Brit Med J 1999;318:1269-1272
  20. Shellhart WC and Oesterle LJ: Assessment of CD-ROM technology on classroom teaching. J Dent Educ 1997;61:817-820
  21. Login GR, Ransil BJ, Meyer MC et al: Assessment of preclinical problem-based learning versus lecture-based learning. J Dent Educ 1997;61:473-9
  22. Bachman MW, Lua MJ, Clay DJ et al: Comparing traditional lecture vs. computer based instruction for oral anatomy. J Dent Educ 1998;62:587-591
  23. Lindquist TJ, Clancy JMS, Johnson LA et al: Effectiveness of computer-aided partial denture design. J Prosthod 1997;6:122-127
  24. Mulligan R, Wood GJ: A controlled evaluation of computer assisted training simulations in geriatric dentistry. J Dent Educ 1993;57:16-24
  25. Roberts BV, Cleary EG, Roberts JV: Graded check lists to assist undergraduate students in self-directed learning and assessment in general and systematic anatomical pathology. Pathology 1997;29:370-3
  26. Lie N: Students' evaluation of education in psychiatry (Abstract). Tidsskrift for Den Norske Laegeforening 1994;1:50-1
  27. Nieman JA: Assessment of a prosthodontic course for dental hygienists using self-instructional learning modules. J Dent Educ 1981;45:65-7
  28. Bachman MW, Lua MJ, Clay DJ et al: Comparing traditional lecture vs. computer based instruction for oral anatomy. J Dent Educ 1998;62:587-591
  29. Lary MJ, Lavigne SE, Muma RD et al: Breaking down barriers: multidisciplinary education model. J Allied Health 1997;26:63-9
  30. Long AF, Mercer PE, Stephens CD et al: The evaluation of three computer-assisted learning packages for general dental practitioners. Brit Dent J 1994;177:410-5
  31. Norman GR, Schmidt HG. Effectiveness of problem-based learning curricula: theory, practice and paper darts. Medical Education 2000;34(9):721-728
  32. Lechner SK, Lechner KM, Thomas GA. Evaluation of a computer aided learning program in prosthodontics. J Prosthodont 1999;7(2):100-105
  33. Boud D and Feletti GE. The challenge of problem based learning. Second Ed. 1997. Kogan Page Ltd. London. P11.
  34. Shin JH, Hanyes RB, Johnston M. The effect of problem-based self-directed undergraduate education on life-long learning. Clinical Investigative Medicine. In Boud D and Feletti GE. The challenge of problem based learning. Second Ed. 1997. Kogan Page Ltd. London. P298.
  35. Boyd EM, Fales AW. Reflective learning: the key to learning from experience. J Humanistic Psychology 1983;23:99-117
  36. Peters A et al. Learner centered approaches in medical education. Academic Medicine 2000; 75: 470-479.
  37. Schuhbeck M et al. Long-term outcomes of the New Pathway Program at Harvard Medical School: a randomized controlled trial. European J Dent Educ 1999;3(1):35-43
  38. Manogue M et al. Improving student learning in root canal treatment using self-assessment. Internat Endodontic J 1999;32(5):397-405
  39. Wenzel A, Gotfredsen E. Students' attitudes towards and use of computer-assisted learning in oral radiology over a 10-year period. Dento-Maxillo-Facial Radiology 1997;26(2):132-6
  40. Davies P. Approaches to evidence-based teaching. Medical Teacher. 2000; 22(1):14-21
  41. Johari Window, named after Joseph Luft and Harry Ingham. http://www.knowmegame.com/Johari_Window/johari_window.html
  42. Zulkardi. http://www.geocities.com/zulkardi/submit3.html

Acknowledgements

My thanks to Professor A Sefton, Faculty of Medicine, University of Sydney, for the knowledge about educational evaluation she has shared with me, and to KMD Harris, Account Manager, The Leading Edge, Market Research Consultants Pty Ltd, Sydney, for many interesting insights into research strategies.

 Correspondence to

Sybille K Lechner BDS, MDS FRACDS, FPFA, FICD
11A/10 Hilltop Crescent
Fairlight
2094
Australia

Phone: 612 9949 5164
Fax: 612 9211 4958

E-mail: slechner@bigpond.net.au


 

