

The electronic thesis has not yet been authorized for public release; for the print copy, please check the library catalog.
(Note: if the thesis cannot be found, or its holding status shows "closed stacks, not public," the copy is not in the stacks and cannot be accessed.)
System ID: U0026-0109201500550400
Title (Chinese): 以開放評價信任模型建構磨課師之同儕互評機制
Title (English): Using Open Assessment Trust Model to Build Peer Assessment in MOOCs
University: National Cheng Kung University (成功大學)
Department (Chinese): 資訊管理研究所
Department (English): Institute of Information Management
Academic year: 103 (2014-2015)
Semester: 2
Year of publication: 104 (2015)
Author (Chinese): 黃富彬
Author (English): Fu-Bin Huang
Student ID: R76024116
Degree: Master's
Language: Chinese
Pages: 95
Committee: Advisor: 呂執中; Members: 陸偉明, 傅新彬, 陳平舜
Keywords (Chinese): MOOCs, 同儕互評, 信任模型, 開放評價
Keywords (English): MOOCs, Peer Assessment, Trust Model, Open Assessment
Subject classification:
Abstract (Chinese, translated): Peer assessment offers an opportunity for reflective learning and is an important function of massive open online courses (MOOCs), making global classrooms possible. In an open environment, however, although peer assessment can aid learning, the trustworthiness of ratings remains a problem to be improved. The main purposes of this study are (1) to verify the usefulness and trustworthiness of a peer assessment mechanism, and (2) to explore the development potential of multi-tier peer assessment.
For the design of the assessment mechanism, this study proposes the Open Assessment Trust Model (OATM). The OATM first uses the Evaluating Rubrics Learning Model (ERLM) to address the problem of poor-quality peer feedback and thereby improve learners' benefit from the course; it then uses Open Assessment Peer Assessment (OAPA) to implement a third-party-observer strategy, building a transparent and trustworthy rating environment. The experiment was conducted in a small private online course (SPOC) environment with 78 students from a required undergraduate major course and 62 students from a general-education elective course.
Through questionnaires and analysis of variance, this study found that (1) peer assessment is usable and trustworthy, especially in the elective class. Regarding the choice of identity-revelation mode and the number of raters, the study recommends real-name assessment in required courses, while elective courses need not consider this factor; the number of raters should be at least six. (2) The effect of multi-tier assessment with third-party observers was limited; considering the time and effort of repeated rating rounds, it is not recommended. Applying the OATM's pre-assessment learning mechanism to single-tier assessment can be considered a direction for future research.
Abstract (English): Peer assessment offers an opportunity to reflect on learning and is an important function supporting massive open online courses (MOOCs), realizing the possibility of global classrooms. However, in an open environment, although peer assessment can help with learning, assessment trust remains a problem to be improved. The main purposes of this study are therefore (1) to verify the usefulness and trustworthiness of peer assessment, and (2) to discuss the development possibilities of multiple-tier peer assessment.
For the design of the peer assessment mechanism, we proposed the Open Assessment Trust Model (OATM). The OATM first uses the Evaluating Rubrics Learning Model (ERLM) to address the problem of poor-quality peer feedback, thereby enhancing learners' benefit from a course. It then uses Open Assessment Peer Assessment (OAPA) to implement a third-party-observer strategy, creating a transparent and trustworthy assessment environment. We selected 78 university students from a required professional course and 62 from a general-education elective course as subjects, and ran the experiment in small private online course (SPOC) environments.
Through questionnaires and ANOVA, we found that (1) peer assessment is useful and trustworthy, especially in elective courses. Regarding the choice of identity-revelation mode and the number of raters, we suggest that required courses use real names, whereas elective courses need not consider this factor, and that the number of raters be at least six. (2) The effect of using third-party observers for multiple-tier peer assessment was limited; considering the time and effort invested, we do not recommend it. Applying the OATM's pre-assessment learning mechanism to single-tier peer assessment may be a direction for future research.
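The abstract's recommendation that each submission be scored by at least six peers can be sketched in a few lines of code. This is a minimal illustration only, not the thesis's actual OATM/OAPA implementation: the function name `aggregate_peer_scores`, the median aggregation rule, and the sample data are all assumptions made for the example.

```python
from statistics import median

def aggregate_peer_scores(ratings, min_raters=6):
    """Aggregate peer ratings for one submission.

    ratings: list of (rater_id, score) pairs on a 0-100 scale.
    Returns the median score, or None when fewer than `min_raters`
    distinct raters have contributed (the study suggests at least 6,
    so a grade based on fewer ratings is treated as untrustworthy).
    """
    if len({rater_id for rater_id, _ in ratings}) < min_raters:
        return None
    return median(score for _, score in ratings)

# Hypothetical usage: seven peers rate one assignment.
ratings = [("r1", 80), ("r2", 85), ("r3", 78), ("r4", 90),
           ("r5", 82), ("r6", 88), ("r7", 84)]
print(aggregate_peer_scores(ratings))   # median of the seven scores
print(aggregate_peer_scores(ratings[:5]))  # too few raters -> None
```

The median is used here (rather than the mean) only because it is robust to a single careless or adversarial rater, which is the kind of rating-trust problem the abstract describes; the thesis's own mechanism layers rubric training (ERLM) and third-party observation (OAPA) on top of the basic rating step.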
Table of Contents:
Chinese Abstract
English Abstract
Acknowledgments
Table of Contents
List of Tables
List of Figures
Chapter 1 Introduction
1.1 Research Background and Motivation
1.2 Research Objectives
1.3 Research Scope and Limitations
1.4 Research Process and Structure
Chapter 2 Literature Review
2.1 What Are MOOCs
2.1.1 History of MOOCs
2.1.2 Learning Characteristics of MOOCs
2.1.3 Future Development and Challenges of MOOCs
2.2 Peer Assessment
2.2.1 The Two Major Assessment Mechanisms in MOOCs
2.2.2 Basic Activity Structure of Peer Assessment
2.2.3 Issues in Implementing Peer Assessment in MOOCs
2.2.4 Comparison of Peer Assessment Practices across Platforms
2.3 Learning Effectiveness
2.3.1 Formative and Summative Assessment
2.3.2 Learners' Identity Revelation
2.3.3 Learners' Feedback Behavior
2.4 Trust Management
2.4.1 Error Types in Peer Assessment
2.4.2 Trust Models for Peer Assessment
2.4.3 Reciprocal Peer Assessment Practices
Chapter 3 Mechanism Design
3.1 Research Model
3.1.1 Open Assessment Trust Model (OATM = ERLM + OAPA)
3.1.2 Evaluating Rubrics Learning Model (ERLM)
3.1.3 Open Assessment Peer Assessment (OAPA)
3.2 Mechanism Architecture
3.2.1 OATM Operating Process
3.2.2 OATM Schedule Planning
3.2.3 Implementation of the Evaluating Rubrics Learning Model
3.2.4 Implementation of Open Assessment Peer Assessment
Chapter 4 Online Testing
4.1 Experimental Design
4.2 Analysis of Results
4.2.1 Usefulness and Trustworthiness of Peer Assessment
4.2.2 Development Potential of Multi-tier Peer Assessment
4.2.3 Summary
Chapter 5 Conclusions and Future Research Directions
5.1 Conclusions
5.2 Future Research Directions
Chapter 6 References
Appendix 1 System Demonstration
Appendix 2 Survey Questionnaire
Thesis Full-Text Access Rights:
  • On-campus browsing/printing of the electronic full text is authorized, publicly available from 2020-09-02.

