Article Type: Research Article (Quantitative)

Authors

1 Assistant Professor, Department of Educational Sciences, Faculty of Humanities, University of Guilan, Rasht, Iran

2 Instructor, Department of Educational Sciences, Faculty of Humanities, University of Guilan, Rasht, Iran

Abstract

The purpose of the present study was to examine the psychometric properties (factor structure, reliability, and validity) of the Course Experience Questionnaire (CEQ-1997; Wilson et al., 1997) among students of the University of Guilan. The study falls within the domain of normative (standardization) research and was carried out in the first semester of the 2021-2022 academic year at the University of Guilan. The statistical population comprised all students admitted in the 2017 and 2018 academic years; using simple random sampling, the study was conducted on 332 third- and fourth-year undergraduate students of the Faculty of Humanities of the University of Guilan. Exploratory factor analysis identified seven factors for the Course Experience Questionnaire, explaining 71% of the total variance. In addition, confirmatory factor analysis of the items showed that all items had significant loadings on their respective factors, and the seven-factor structure of the questionnaire was confirmed. Concurrent criterion validity was examined and confirmed through the correlations of each factor of the Course Experience scale with the theoretically related criterion variables (overall satisfaction with the course and academic achievement). Overall, the findings showed that the Course Experience Questionnaire (CEQ-1997) has adequate reliability.

Keywords

Article Title [English]

The Study of Psychometric Features (Factor Structure, Validity, and Reliability) of the Course Experience Questionnaire (CEQ) among University of Guilan Students

Authors [English]

  • Ali Poursafar 1
  • Naghi Raadi afsuran 1
  • Hamid Abdi 2

1 Assistant Professor, Department of Educational Sciences, Faculty of Humanities, University of Guilan, Rasht, Iran

2 Instructor, Department of Educational Sciences, Faculty of Humanities, University of Guilan, Rasht, Iran

Abstract [English]

The purpose of this study was to determine the psychometric features (factor structure, validity, and reliability) of the Course Experience Questionnaire (CEQ), which has 43 items and was developed by Wilson et al. (1997). This is a normative (standardization) study carried out in the first semester of the 2021-2022 academic year at the University of Guilan. The statistical population included all students admitted in the 2017 and 2018 academic years; the study was conducted on 332 third- and fourth-year undergraduate students of the Faculty of Humanities of the University of Guilan, selected by simple random sampling. Exploratory factor analysis extracted seven factors that together explained 71% of the total variance. Furthermore, confirmatory factor analysis showed that all items had significant loadings on their respective factors, and the seven-factor structure of the questionnaire was confirmed. Concurrent criterion validity was also examined and confirmed through the correlations between each factor of the Course Experience scale and the theoretically related criterion variables (overall course satisfaction and academic achievement). Altogether, the findings of this study show that the Course Experience Questionnaire (CEQ) has adequate reliability.
Extended Abstract
Introduction
One of the most important concerns of any university system is designing educational courses that are effective for students. Promoting the quality of the curriculum and improving educational programs is essential to attaining a high-quality education (Thien & Jamil, 2019) and is also crucial to student satisfaction (Griffioen, Doppenberg, & Oostdam, 2018). There is a growing tendency to investigate students' satisfaction with the quality of educational courses (Yin, Gonzalez, & Huang, 2018). The significance of this issue has made universities attentive to the effectiveness of their programs and courses, and evaluating educational courses has never been a higher priority. Evaluation is essential in that it measures the effectiveness of individual curriculum elements and, at a broader level, of the entire curriculum (Wolstencroft & Main, 2021). Given these points, the present study seeks to localize the Course Experience scale (Wilson et al., 1997) as a tool with which students evaluate their educational courses.
Theoretical Framework
Evaluation or assessment refers to a systematically structured process of collecting and interpreting information to determine the extent to which the intended goals of a program have been attained (Saif, 2022). Evaluation can serve countless purposes, including determining the value, merit, importance, and worth of an educational phenomenon in order to judge and make decisions about establishing a system (program), continuing its activities, adjusting the system, validating it, understanding its different dimensions, and supporting it (Bazargan, 2021). In this regard, institutions of higher education also recognize the value of evaluation and try to use it as effectively as possible to improve the quality of their programs.
Evaluation in the university is carried out in different ways, such as evaluation by students, self-evaluation, and evaluation by the head of the department, colleagues, and institutional officials. One of the most common methods in most countries, including Iran, is evaluation by students. Through evaluation, students come to understand the curriculum (Wolstencroft & Main, 2021); in other words, assessment causes the higher education curriculum to be defined and reviewed from the student's point of view (Ramsden, 2003). It also significantly impacts their learning (Biggs, 2003) and contributes to students' satisfaction with the educational course (Griffioen, Doppenberg, & Oostdam, 2018). Students are present in the instructor's classroom throughout an academic semester, so they come to know the instructor's capabilities and manner of teaching; moreover, they can judge better than anyone else whether the professor has been able, directly or indirectly, to convey the course material to them (Nasr et al., 2004). Therefore, students' evaluation of the educational courses offered to them is the most influential factor in determining the effectiveness of these courses.
Most of the questionnaires used so far have focused on students' evaluations of individual lecturers rather than on courses as a whole. As Marsh (1987) notes, although these types of evaluations provide significant and helpful information, such as feedback to faculty members and input for personnel decision-making, and are valuable for evaluating teaching, from the point of view of a higher education institution that seeks to maintain and improve the quality of education it is far more appropriate to focus on the evaluation of the entire course (Richardson, 1994).
Arriving at a standard for measuring the effectiveness of educational courses has required many efforts by those involved in higher education. University managers try to improve their institution's performance on the basis of performance indicators, which they regard as guarantors of teaching quality in different fields and of its attainment, and much work has gone into compiling such indicators. In this regard, the Course Experience Questionnaire (CEQ) was developed as a credible tool for evaluating educational courses against students' needs and was implemented as a national survey in Australian universities in the early 1990s. Because of the critical role of students' perceptions in evaluating the quality of educational courses, and because most evaluation research conducted in the country has focused only on professors' presentation of the lesson (Vakili et al., 2010), the current study attempts to localize the Course Experience scale (Wilson et al., 1997) for the Iranian student population.
Methodology
The current research is a normative (standardization) study carried out in the first semester of the 2021-2022 academic year at the University of Guilan. The statistical population included all students entering in the 2017 and 2018 academic years, a population of 3,830 people. Based on Morgan's table, 350 people were identified as a sufficient sample size for this population. With multiple follow-ups, 332 questionnaires (94%) were completed and collected through simple random sampling. The study used the Course Experience Questionnaire (Wilson et al., 1997); with the addition of a four-item appropriateness-of-content subscale, the questionnaire comprised 43 items. An additional item measuring overall satisfaction with the course was administered alongside the questionnaire, and academic achievement was measured using students' GPAs.
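For readers who wish to reproduce the sample-size figure, a minimal Python sketch is given below. It assumes the formula commonly reported as the basis of Morgan's table (Krejcie and Morgan, 1970): a chi-square value of 3.841 for one degree of freedom, an assumed proportion of 0.5, and a 5% margin of error. The function name and these default values are illustrative assumptions, not details taken from the original study.

```python
import math

def morgan_sample_size(population: int,
                       chi_sq: float = 3.841,   # chi-square for 1 df at the 95% confidence level
                       p: float = 0.5,          # assumed proportion (maximum variability)
                       d: float = 0.05) -> int: # desired margin of error
    """Sample size per the formula usually cited as underlying Morgan's table (assumed here)."""
    numerator = chi_sq * population * p * (1 - p)
    denominator = d ** 2 * (population - 1) + chi_sq * p * (1 - p)
    return math.ceil(numerator / denominator)

# Population of 3,830 students entering in 2017 and 2018, as reported in the text
print(morgan_sample_size(3830))  # about 350, matching the sample size reported above
```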
Discussion and Results
In order to extract the scale factors, exploratory factor analysis was used. Principal-components extraction with orthogonal rotation yielded seven factors that explain 71% of the total variance. The fit indices computed for the confirmatory model were all acceptable: their values are close to the recommended criteria and indicate satisfactory adequacy of the structural model. As a result, the seven-factor structure of the Course Experience Questionnaire and the confirmatory factor analysis model show an acceptable fit and are confirmed. The CFI (comparative fit index), NFI (normed fit index), TLI (Tucker-Lewis index), and IFI (incremental fit index) all exceed 0.90, indicating an acceptable model. The root mean square error of approximation (RMSEA), which reflects parsimonious fit, is 0.050, indicating that the model fits appropriately. Finally, the GFI (goodness-of-fit index), which reflects absolute fit, is 0.95, indicating that the model reproduces the observed data within the acceptable range and that the proposed model is confirmed.
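As an illustration of the exploratory step described above (not the authors' actual analysis script), the following Python sketch runs a seven-factor solution with principal-components extraction and varimax (orthogonal) rotation using the factor_analyzer package. The data file name and the KMO/Bartlett checks are illustrative assumptions; the confirmatory model and its fit indices (CFI, NFI, TLI, IFI, RMSEA, GFI) would typically be estimated separately in a structural equation modeling package.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

# Hypothetical file holding the 43 CEQ item responses (one column per item)
items = pd.read_csv("ceq_items.csv")

# Sampling adequacy and sphericity checks commonly reported before EFA
kmo_per_item, kmo_total = calculate_kmo(items)
chi_square, p_value = calculate_bartlett_sphericity(items)

# Seven-factor solution with principal-components extraction and varimax rotation
efa = FactorAnalyzer(n_factors=7, method="principal", rotation="varimax")
efa.fit(items)

loadings = pd.DataFrame(efa.loadings_, index=items.columns)
variance, proportion, cumulative = efa.get_factor_variance()
print(f"KMO = {kmo_total:.2f}, Bartlett p = {p_value:.3f}")
print(f"Cumulative variance explained: {cumulative[-1]:.2%}")  # about 71% in the reported study
```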
The data show that the Cronbach's alpha coefficient of the Course Experience scale is 0.95, which is satisfactory. In addition, the correlations of the scale items with the total score are significant (p = 0.001), with coefficients ranging from 0.51 to 0.83; in other words, all items of the Course Experience scale have the necessary homogeneity. Pearson correlation coefficients showed a high correlation between the seven subscales (good teaching, clear goals, appropriate assessment, appropriate workload, emphasis on independence, generic skills, and appropriateness of content) and the overall scale (course experience) (p = 0.001). In addition, the correlation coefficients between the Course Experience scale and all of its components and the overall course satisfaction item are significant (p = 0.001), as are the correlations between the course experience scale and its components and academic achievement (p = 0.001). These results support the concurrent criterion validity of the scale: the Course Experience scale and its components correlated with the measures theoretically related to them (satisfaction with the course and academic achievement).
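The reliability and criterion-validity statistics reported above can be reproduced with standard formulas. The sketch below computes Cronbach's alpha, item-total correlations, and Pearson correlations with the criterion variables; the data file names, column names, and the choice of corrected (rather than uncorrected) item-total correlations are assumptions made for illustration only.

```python
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data frames: 43 CEQ item responses plus criterion variables
items = pd.read_csv("ceq_items.csv")
criteria = pd.read_csv("criteria.csv")  # columns: overall_satisfaction, gpa

total = items.sum(axis=1)
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")  # reported as 0.95 above

# Corrected item-total correlations (each item vs. the total score excluding that item)
for col in items.columns:
    r, p = stats.pearsonr(items[col], total - items[col])
    print(f"{col}: r = {r:.2f}, p = {p:.3f}")

# Concurrent criterion validity: total scale score vs. overall satisfaction and GPA
for crit in ["overall_satisfaction", "gpa"]:
    r, p = stats.pearsonr(total, criteria[crit])
    print(f"CEQ total vs {crit}: r = {r:.2f}, p = {p:.3f}")
```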
Conclusion
According to the findings of this research, the Course Experience scale showed high validity and reliability, so it can be used as a valid and reliable tool for measuring the quality of educational courses in the country's higher education system. The scale can also have many applications in research on teaching and learning in higher education. It likewise provides appropriate performance indicators of university teaching quality, so it can be used to align the evaluation of university professors' teaching quality with international standards. The tool can inform the recruitment and promotion of faculty members and support their professional growth; in this regard, teaching competence can be improved by holding workshops and training courses for academic staff.

Keywords [English]

  • Student Evaluation
  • Reliability
  • Course Experience
  • Validity
  • Factor Structure
  • Indicators
References

Ali, H., & Dodeen, H. M. (2020). An adaptation of the course experience questionnaire to the Arab learning context. Assessment & Evaluation in Higher Education, 1-12. DOI: 10.1080/02602938.2020.1841733.
Asonitou, S., Mandilas, A., Chytis, E., & Latsou, D. (2018). A Greek Evaluation of the Course Experience Questionnaire: Students' Conceptions of the Teaching Quality of Higher Education Accounting Studies. International Journal of Business & Economic Sciences Applied Research, 11(2), 51-62.
Bazargan, A. (2021). Educational evaluation, 19th edition, Tehran: Samt Publications. (in Persian).
Biggs, J. (1999). Teaching for quality learning at university (Camberwell, Victoria, ACER).
Biggs, J. (2003). Teaching for quality learning at university. 2nd ed. Buckingham: Open University Press.
Brown, J. L., Hammer, S. J., Perera, H. N., McIlveen, P. (2022). Relations between graduates’ learning experiences and employment outcomes: a cautionary note for institutional performance indicators, International Journal for Educational and Vocational Guidance, 22: 137-156, https://doi.org/10.1007/s10775-021-09477-0.
Buntat, Y., Jabor, M. M., Sukri Saud, M., Syed Mansor, S. M. S., & Mustaffa, N. H. (2013). Employability skills element’s: Difference perspective between teaching staff and employers industrial in Malaysia. Procedia - Social and Behavioral Sciences, 93, 1531-1535.
Byrne, M. & Flood, B. (2003) Assessing the teaching quality of accounting programmes: An evaluation of the Course Experience Questionnaire, Assessment and Evaluation in Higher Education, 28(2): 135–145.
Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315–1325. https://doi.org/10.1080/02602938.2018.1463354.
Dawson, P., Henderson, M., Mahoney, P., Phillips, M., Ryan, T., Boud, D., et al. (2019). What makes for effective feedback: Staff and student perspectives. Assessment & Evaluation in Higher Education, 44(1), 25–36. https://doi.org/10.1080/02602938.2018.1467877.
Espeland, V. & Indrehus, O. (2003) Evaluation of students’ satisfaction with nursing education in Norway, Journal of Advanced Nursing, 42(3), 226–236.
Faranda, W. T., Clarke, T. B., & Clarke III, I. (2021). Marketing Student Perceptions of Academic Program Quality and Relationships to Surface, Deep, and Strategic Learning Approaches. Journal of Marketing Education, 43(1), 9–24.
Forsythe, A., & Johnson, S. (2017). Thanks, but no-thanks for the feedback. Assessment & Evaluation in Higher Education, 42(6), 850–859. https://doi.org/10.1080/02602938.2016.1202190.
Griffioen, D.M.E., Doppenberg, J.J., & Oostdam, R.J. (2018). Are more able students in higher education less easy to satisfy? Higher Education, 75(5), 891–907. doi:10.1007/s10734-017-0176-3.
Heydar, F. T. (2021). The Applicability of the Course Experience Questionnaire in Accounting Education in Saudi Arabia. Journal of Accounting, Finance and Auditing Studies, 7(3), 184-207.
Huybers, T. (2017). Exploring the use of best-worst scaling to elicit course experience questionnaire responses. Assessment & Evaluation in Higher Education, 42(8), 1306–1318.
Kulik J. A. (2001). Student ratings: Validity, utility and controversy. New Directions for Institutional Research, 109:9-25.
Liu, J.C.; St. John, K.; & Courtier, A.M.B. (2017) Development and Validation of an Assessment Instrument for Course Experience in a General Education Integrated Science Course. Journal of Geoscience Education, 65(4), 435–454. DOI: 10.5408/16-204.1.
Mahmoudi Sahebi, M., Nasri, S., and Gholinia Qolzam, H. (2012). Identifying the evaluation criteria of professors' teaching performance with an emphasis on engineering education, Journal of Educational Technology, 7(4): 315-324. (in Persian).
Marais, D., & Perkins, J. (2012). Enhancing employability through self-assessment. Procedia - Social and Behavioral Sciences, 46, 4356–4362.
Marsh, H. W. (1987). Students’ evaluations of university teaching: Research findings, methodological issues, and directions for future research, International Journal of Educational Research, 11: 253–388.
Moezi, M., Shirzad, H., Zamanzadeh, B., and Rouhi, H. (2010). The views of faculty members and students on the evaluation of professors and the effective criteria in professor teaching in Shahr Kurd University of Medical Sciences, Journal of Shahr Kurd University of Medical Sciences, 4(11): 63-75. (in Persian).
Molloy, E., Boud, D., & Henderson, M. (2019). Developing a learning-centred framework for feedback literacy. Assessment & Evaluation in Higher Education. https://doi.org/10.1080/02602938.2019.1667955.
Nasr, A. R., Sharif, M., and Parhehi, H. R. (2004). Teaching evaluation, Tehran: Ministry of Science, Research and Technology. (in Persian).
Postareff, L., Mattsson, M., Lindblom-Ylänne, S., & Hailikari, T. (2017). The complex relationship between emotions, approaches to learning, study success and study progress during the transition to university. Higher Education, 73(3), 441–457. https://doi.org/10.1007/s10734-016-0096-7.
Price, L., Richardson, J. T. E., Robinson, B., Ding, Xia., Sun, X. & Han, C. (2011). Approaches to studying and perceptions of the academic environment among university students in China. Asia Pacific Journal of Education, 31(2): 159-175.
Prosser, M. & Trigwell, K. (1999). Understanding learning and teaching: the experience in higher education. Buckingham: Open University Press.
QILT (2020). Quality indicators for learning and teaching. https://www.qilt.edu.au/.
Ramsden, P. (1984). The context of learning, in: F. Marton et al. (Eds) The experience of learning, pp. 124-143. Edinburgh: Scottish Academic Press.
Ramsden, P. (1987). Improving teaching and learning in higher education: The case for a relational perspective. Studies in Higher Education, 12(3): 275-286.
Ramsden, P. (1991). A performance indicator of teaching quality in higher education: The course experience questionnaire, Studies in Higher Education, 16(2): 129–150.
Ramsden, P. (2003). Learning to teach in higher education (2nd edn). London: Routledge.
Richardson, J. T. E. (1994). A British evaluation of the course experience questionnaire, Studies in Higher Education, 19 (1): 59–68.
Richardson, J. T. E. (2005). Students’ perceptions of academic quality and approaches to studying in distance education, British Educational Research Journal, 31(1), 7–27.
Saif, A. A. (2022). Educational measurement, assessment and evaluation, Tehran: Duran Publishing. (in Persian).
Saputra, E., Handrianto, C., Pernantah, P. S., Ismaniar, I., & Shidiq, G. A. (2021). An evaluation of the Course Experience Questionnaire in a Malaysian context for quality improvement in teaching and learning. Journal of Research, Policy & Practice of Teachers & Teacher Education (JRPPTTE), 11(1): 1-12. https://doi.org/10.37134/jrpptte.vol11.1.1.2021
Shakurnia, A. H., Malairi, A. R., Tarabpour, M., and Elhampour, H. (2006). Correlation between teacher's evaluation score and student's GPA, Iranian Journal of Education in Medical Sciences, 6(1): 51-58. (in Persian).
Stringer, M. & Finlay, C. (1993). Assuring quality through student evaluation, in: C. Ellis (Ed.) Quality Assurance for University Teaching, Buckingham: SRHE/Open University Press.
Thien, L. M., & Jamil, H. (2019). Students as ‘Customers’: unmasking course experience and satisfaction of undergraduate students at a Malaysian Research University. Journal of Higher Education Policy and Management. DOI: 10.1080/1360080X.2019.1660045.
Trigwell, K. & Prosser, M. (1991). Improving the quality of student learning: The influence of learning context and student approaches to learning on learning outcomes, Higher Education, 22(3): 251–266.
Ullah, R., Richardson, J. T. E. & Hafez M. (2011). Approaches to studying and perceptions of the academic environment among university students in Pakistan, Compare, 41(1): 113-127.
Vakili, A., Haji Aghajani, S., Rashidipour, A., and Ghorbani, R. (2010). Investigating the factors influencing the evaluation of professors from the students' point of view: a comprehensive study in Semnan University of Medical Sciences, Komesh, 2(38): 93-104. (in Persian).
Wilson, K. L., Lizzio, A. & Ramsden, P. (1997). The development, validation and application of the course experience questionnaire, Studies in Higher Education, 22 (1): 33–53.
Wolstencroft, P., & de Main, L. (2021). ‘Why didn’t you tell me that before?’ Engaging undergraduate students in feedback and feedforward within UK higher education. Journal of Further and Higher Education, 45(3): 312-323.
Yin, H., González, C., & Huang, S. (2018). Undergraduate students’ approaches to studying and perceptions of learning context: A comparison between China and Chile. Higher Education Research and Development, 37(7), 1530–1544. doi:10.1080/07294360.2018.1494142.
Yin, H., Lu, G., & Meng, X. (2022). Online course experiences matter: adapting and applying the CEQ to the online teaching context during COVID-19 in China. Assessment & Evaluation in Higher Education, 45(2): 121-142. https://doi.org/10.1080/02602938.2022.2030671.
Zhang, P., Lu, G., & Cheng, W. (2006). Impact of teaching situations on undergraduates’ approaches to teaching. Research in Teaching, 19: 301–305.