System ID  U0026-2001201700443300
Title (Chinese)  檢測理工科研究生專業口說能力之電腦化口說測驗的設計
Title (English)  Design of Computer-assisted Speaking Tests for Engineering Majors in Higher Education
University  National Cheng Kung University
Department (Chinese)  外國語文學系
Department (English)  Department of Foreign Languages & Literature
Academic year  105
Semester  1
Year of publication  106
Author (Chinese)  郭俊葳
Author (English)  Chun-Wei Kuo
Student ID  k26021099
Degree  Master's
Language  English
Number of pages  122
Committee  Advisor: 高實玫
Committee member: 鄒文莉
Committee member: 陳慧琴
Keywords (Chinese)  英文授課, 專業學術英文課程, 電腦化測驗, 口說測驗, 誘發模仿, 朗讀
Keywords (English)  English as a medium of instruction, English for science and engineering, computer-assisted test, oral proficiency test, elicited imitation, read-aloud
Subject classification
Abstract (Chinese) With the trend of globalization, English has become one of the primary languages of instruction in higher education. To meet the demand for English Medium Instruction (EMI) in specialized fields, universities and colleges offer courses in English for Specific Academic Purposes (ESAP) to prepare students to study their disciplines in English. Assessing students' discipline-specific English proficiency has therefore attracted the interest of many researchers. However, current standardized language tests include no speaking test designed for specific disciplines, and manual scoring demands considerable manpower and time. This study therefore set out to design computer-assisted, discipline-specific speaking tests that combine two item formats, elicited imitation and read-aloud, so that instructors can use the automatic speech recognition system built into a computer-assisted test to evaluate students' speaking performance in real time. The elicited-imitation test covers the spoken forms of mathematical expressions and the discipline-specific academic vocabulary commonly used in science and engineering textbooks, while the read-aloud test contains related science and engineering passages. In addition, the high-intermediate standardized general speaking test, the discipline-specific listening test, and the discipline-specific reading test developed by the Language Training and Testing Center (LTTC) served as benchmarks for examining the reliability and validity of the computer-assisted speaking test scores.
The participants were 28 graduate students in science and engineering at a national university in southern Taiwan. They took the tests described above before and after a six-week summer ESAP course and completed an online feedback questionnaire at the end of each administration. Based on the feedback from the first administration, the researcher revised the speech rate of the elicited-imitation test and the sentence length of the read-aloud test in the initial version of the computer-assisted tests. After the second administration, the researcher also interviewed four participants to explore their test-taking experiences in more depth. The main findings are as follows:
1. Compared with the initial version, the revised elicited-imitation and read-aloud tests correlated more significantly with the discipline-specific listening and reading tests, but less significantly with the general LTTC speaking test.
2. The participants made significant progress on the elicited-imitation test and the discipline-specific reading test after the intensive course, indicating that these two formats are better able to detect discipline-specific English proficiency in engineering.
3. In the first administration, the four computer-generated scores (i.e., pronunciation, intonation, volume, and fluency) correlated significantly with the manual scores; the correlation between the two scoring methods was lower in the second administration.
4. In the questionnaire, the participants reported that the vocabulary and content of the elicited-imitation and read-aloud tests were less difficult in the second administration. They were generally positive about the two tests' ability to detect their discipline-specific speaking proficiency and technical vocabulary, but they also noted that the test content was still not specialized enough for graduate students.
The results confirm that the computer-assisted speaking tests, and the elicited-imitation test in particular, detect science and engineering students' discipline-specific English proficiency better than a general speaking test does. As for the reduced correlation between the revised computer-assisted tests and the general speaking test, the study found that the general speaking test is not designed for specific disciplines and therefore becomes less able to measure participants' discipline-specific English proficiency after an intensive course, whereas the computer-assisted tests reflect the growth in students' discipline-specific English after the course; this explains why the correlations between the two tests dropped in the second phase, and it supports the computer-assisted tests as a reliable instrument for quickly screening students. Finally, the interviews suggested that the test content still did not fully meet science and engineering graduate students' expectations for discipline-specific English assessment. The results of this study can serve as a reference for language test designers and future related research.
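To make the automatic-scoring idea above concrete, the following is a minimal, illustrative Python sketch of how a recognizer's transcript of an elicited-imitation response could be compared against its target sentence to yield a word-level accuracy. It is not the scoring logic of the commercial system used in the study (the MyET platform listed in the references); the helper name and the example sentences are invented for illustration.

```python
import difflib

def word_accuracy(target: str, recognized: str) -> float:
    """Proportion of target words matched in the recognized transcript,
    using a longest-common-subsequence style alignment."""
    target_words = target.lower().split()
    recognized_words = recognized.lower().split()
    matcher = difflib.SequenceMatcher(None, target_words, recognized_words)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / len(target_words) if target_words else 0.0

# Hypothetical EI item: reading a mathematical expression aloud.
target = "the derivative of x squared is two x"
recognized = "the derivative of x square is two x"   # imagined ASR output
print(f"Word accuracy: {word_accuracy(target, recognized):.2f}")
```

A production scoring engine would of course work from acoustic confidence scores and prosodic features rather than a plain transcript; the sketch only shows the text-alignment step.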
Abstract (English) With the trend of globalization, English has become a primary medium of instruction (EMI) in higher education. To address the needs arising from EMI courses, an increasing number of universities have offered courses in English for specific academic purposes (ESAP) to equip their students with the required language proficiency. Evaluating students' English proficiency in designated fields has recently aroused interest among test designers. However, no oral proficiency test for specific purposes is currently available, and the human scoring used in standardized oral evaluation inevitably consumes much time and cost. This study therefore addresses these two concerns with computer-assisted speaking tests for specific academic purposes built on two testing formats: elicited imitation (EI) and read-aloud (RA). The EI test contains mathematical terms and discipline-specific words frequently used in textbooks for science and engineering majors, while the RA test contains reading passages related to technology. To verify the validity and reliability of the computer-assisted tests, the scores were statistically compared with those obtained from the high-intermediate level general-purpose English proficiency speaking test (GEPT-S), the English for Specific Academic Purposes Listening Test (ESAP-L), and the English for Specific Academic Purposes Reading Test (ESAP-R) designed by the LTTC.
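As a rough, hypothetical illustration of the validation step just described, the sketch below computes Pearson correlations between a computer-assisted test score and the reference tests. The score vectors are invented placeholders rather than the study's data; with the real data they would be the 28 participants' scores on the EI/RA tests and on the LTTC-designed GEPT-S, ESAP-L, and ESAP-R.

```python
import numpy as np
from scipy.stats import pearsonr

# Invented placeholder scores (10 hypothetical test-takers), not study data.
ei     = np.array([72, 65, 80, 58, 90, 77, 69, 84, 61, 75])   # computer-assisted EI
esap_r = np.array([70, 60, 85, 55, 92, 74, 66, 88, 59, 73])   # discipline-specific reading
gept_s = np.array([68, 71, 74, 62, 80, 70, 73, 76, 64, 69])   # general speaking test

# Concurrent validity: does the EI track the discipline-specific test
# more closely than the general-purpose speaking test?
for name, ref in [("ESAP-R", esap_r), ("GEPT-S", gept_s)]:
    r, p = pearsonr(ei, ref)
    print(f"EI vs {name}: r = {r:.2f}, p = {p:.3f}")
```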
The participants included 28 graduate students from the School of Science and Engineering at a university in southern Taiwan. The students took the five tests mentioned above twice, before and after a six-week ESAP course for engineering majors. After the first administration, they completed an online survey. Based on their responses, the researcher revised the initial version of the computer-assisted tests in terms of the speech rate of the EI and the sentence length of the RA. The same survey was conducted again after the second administration. In addition, four students voluntarily participated in a semi-structured interview after the second administration to share their test-taking experiences. The main findings of the study are summarized as follows:
1. Compared with the initial version, the revised EI and RA correlated more strongly with the ESAP-L and ESAP-R, but less strongly with the GEPT-S.
2. Significant progress was found in the students' performance on the EI and ESAP-R tests after the intensive ESAP course.
3. In the first administration, the four categorical scores (i.e., pronunciation, timing, pitch, and emphasis) in the computer-assisted tests correlated with the manual scores on the GEPT-S. However, the correlation between the two scoring methods decreased in the second administration.
4. The survey results indicate that the content and vocabulary of the EI and RA in the second administration were more appropriate for science students. In addition, although the participants were generally positive about the ability of the computer-assisted tests to predict their oral proficiency and vocabulary in science domains, they indicated that the content was still not specific enough for graduate students.
The findings reveal that students' professional oral proficiency could not be detected through the GEPT-S. Furthermore, compared with the RA, the EI evaluated science and engineering majors' development of professional English abilities more successfully. Finally, based on the participants' responses, the content of the computer-assisted tests still seemed not specific enough for graduate students. The findings of the study provide implications and suggestions for test designers, ESP practitioners, and researchers.
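Findings 2 and 3 above rest on two standard analyses: a paired comparison of pre- and post-course scores, and an agreement check between the machine's categorical scores and human ratings. The sketch below shows one way such analyses are typically run; all numbers are invented placeholders, not the thesis's results.

```python
import numpy as np
from scipy.stats import ttest_rel, pearsonr

# Invented pre-/post-course EI scores for the same hypothetical students.
pre  = np.array([62, 70, 55, 68, 74, 60, 66, 59, 72, 65])
post = np.array([70, 76, 63, 73, 80, 69, 71, 66, 79, 72])
t, p = ttest_rel(post, pre)               # paired t-test on course gains
print(f"Pre/post EI comparison: t = {t:.2f}, p = {p:.3f}")

# Agreement between an averaged machine score (e.g., the mean of the four
# categorical scores) and a human rating, again on invented numbers.
machine = np.array([78, 65, 82, 70, 74, 88, 61, 69, 80, 75])
human   = np.array([75, 68, 85, 72, 70, 90, 60, 72, 78, 74])
r, p_r = pearsonr(machine, human)
print(f"Machine vs human scoring: r = {r:.2f}, p = {p_r:.3f}")
```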
Table of Contents ABSTRACT (Chinese)…………………………………………………………………...…i
ABSTRACT (English)……………………………………………………………………..iii
ACKNOWLEDGEMENTS………………………………………………………………...v
TABLE OF CONTENTS…………………………………………………………………..vi
LIST OF TABLES………………………………………………………………………….x
LIST OF FIGURES………………………………………………………………………...xi
CHAPTER ONE INTRODUCTION……………………………………………………..1
Background and Motivation……………………………………………………...……1
The Purpose of the Study……………………………………………………………...5
Research Question………………………………………………………...…………...5
Definitions of Terms………….……..…………………...…………………………….6
CHAPTER TWO LITERATURE REVIEW……………………………………………...9
EMI and ESP in Higher Education…………………………………………………….9
Contributions and Threats of EMI………………………………………………..9
ESP Development……………………………………………………………….13
Academic benefits of ESP…………………………………………………13
EAP and ESAP in tertiary education…………….………………………...14
Necessity to construct ESP tests…………………………………………...17
Automatic Speech Recognition………………………………………………………19
Elicited Imitation and Read-Aloud Assessment……………………………………...22
Elicited Imitation as Language Assessment…………………………………….22
Read-aloud as Language Assessment…………………………………………...26
The Establishment of Specialized Wordlist…………………………………………..28
CHAPTER THREE METHODOLOGY……………………………………………...30
Research Design…...………………………………………………………………....30
Participants…………………………………………………………………………...32
Research Procedures………………………………………………………………….34
Instruments………………………………………………………………………...…37
Design of the High-intermediate GEPT-S, ESAP-L and ESAP-R………...…....38
Design of the EI………………………………………………………………....39
Textbook selection…………………………………………………………39
Corpus compilation and wordlist generation………………………………40
Design of the test items……………………………………………………40
Design of the RA………………………………………………………………..43
Design of the Online Survey……..……………………………………………..43
Design of the Semi-structured Interview…………………..……………………44
Scoring……………………………………………………………………………......44
The Scoring of the GEPT-S and ESAP-L/R…………………………………….44
The Scoring of the EI and RA…………...……………………………………...45
The Scoring of the Survey………...…………...……………………………….46
Test Administration Procedures…… …..…………………………………………….46
The Procedure of the ESAP-L and ESAP-R………...………………………….47
The Procedure of the GEPT-S…………..………………………………………48
The Procedure of the RA and EI…………...…………………………………...48
Data Analysis………………………………...……………………………………….50
CHAPTER FOUR RESULTS AND DISCUSSION………………………………...53
Results of Research Question 1: Can the EI and RA Validly Predict the Students’ Professional English Proficiency? ...............................................................................53
Homogeneity between the Two Versions of the EI and RA…………………….53
Validities of the EI and RA……………...………………………...…….……...56
Results of Research Question 2: Can the EI and the RA Reflect the Students’ Performance on Domain-specific Learning in Engineering/Science?.........................60
Results of Research Question 3: Can the Categorical Scoring Procedures of the Computer-assisted Tests Reliably Evaluate Students’ Speaking Performance? ..........62
Results of Research Question 4: How Do the Students Perceive the Two Computer-assisted Tests? ................................................………………………........ 65
Test Design………………………………………………………………….......65
Sentence length………………………………………………………….…65
Speech rate………………………………………………………………...66
Lexical difficulty…………………………………………………………..67
Content Difficulty………………………………………………………….70
Students’ Perception toward Predictability……………………………………..71
Student’s Test-taking Experience……………………………………………….74
Discussion…………………………………………………………………………....77
CHAPTER FIVE CONCLUSION…………………………………………………86
Summary of the Findings…………………………………………………………….86
Implications…………………………………………………………………………..88
Limitations……………………………………………………………………………90
Suggestions for Future Research……………………………………………………..91
REFERENCES…………………………………………………………………………….93
APPENDICES
Appendix A………………………………………………………………………….106
Appendix B………………………………………………………………………….107
Appendix C………………………………………………………………………….108
Appendix D…………………………………………………………………………109
Appendix E………………………………………………………………………….110
Appendix F………………………………………………………………………….114
Appendix G…………………………………………………………………………116
Appendix H…………………………………………………………………………122
References  A. English references
Al-Bakri, S. (2013). Problematizing English medium instruction in Oman. Int. J. Bilin. Mult. Teach. Eng., 1(2), 55-69.
Ashwell, T. (2014). Automated scoring for elicited imitation tests. Journal of global media studies, 13, 37-41.
Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice: Designing and developing useful language tests (Vol. 1). Oxford University Press.
Baker, C., & Jones, S. P. (Eds.). (1998). Encyclopedia of bilingualism and bilingual education. Clevedon, UK: Multilingual Matters.
Barron, C. (1992). Cultural syntonicity: Co-operative relationships between the ESP unit and other departments. Hong Kong Papers in Linguistics and Language Teaching, 15, 1-14.
Basturkmen, H., & Elder, C. (2004). The practice of LSP. The handbook of applied linguistics, 672-694.
Ben-Eliyahu, A. (2014). On methods: what’s the difference between qualitative and quantitative approaches? The Chronicle of Evidence Based Mentoring.
Retrieved from http://chronicle.umbmentoring.org/on-methods-whats-the-difference-between-qualitative-and-quantitative-approaches/.
Bernstein, J., De Jong, J. H. A. L., Pisoni, D., & Townshend, B. (2000). Two experiments on automatic scoring of spoken language proficiency. Proc. of STIL (Integrating Speech Technology in Learning), 57-61.
Bley-Vroman, R., & Chaudron, C. (1994). Elicited imitation as a measure of second-language competence. Research methodology in second-language acquisition, 245-261.
Brown, H. G. (2014). Contextual factors driving the growth of undergraduate English-medium instruction programmes at universities in Japan. The Asian Journal of Applied Linguistics, 1(1), 50-63.
Byun, K., Chu, H., Kim, M., Park, I., Kim, S., & Jung, J. (2011). English-medium teaching in Korean higher education: Policy debates and reality. Higher Education, 62(4), 431-449.
Chalak, A., & Kassaian, Z. (2010). Motivation and attitudes of Iranian undergraduate EFL students towards learning English. GEMA: Online Journal of Language Studies, 10(2), 37-56.
Chen, Y. (2011). The institutional turn and the crisis of ESP pedagogy in Taiwan. Taiwan International ESP Journal, 3(1), 17-30.
Christensen, C., Hendrickson, R. & Lonsdale, D. (2010). Principled construction of elicited imitation tests. Paper presented in Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC’10), 233-238. European Language Resources Association.
Clapham, C. (1996). The development of IELTS: A study of the effect of background on reading comprehension (Vol. 4). Cambridge University Press.
Costa, F., & Coleman, J. A. (2013). A survey of English-medium instruction in Italian higher education. International Journal of Bilingual Education and Bilingualism, 16(1), 3-19.
Cox, T., & Davies, R. S. (2012). Using automatic speech recognition technology with elicited oral response testing. Calico Journal, 29(4), 601-618.
Coxhead, A. (2000). A new academic word list. TESOL Quarterly, 34(2), 213-238.
Crawford, L., & Tindal, G. (2004). Effects of a read-aloud modification on a standardized reading test. Exceptionality, 12(2), 89-106.
Crawford, L., Tindal, G., & Stieber, S. (2001). Using oral reading rate to predict student performance on statewide achievement tests. Educational Assessment, 7(4), 303-323.
Cummins, J. (1976). The influence of bilingualism on cognitive growth: A synthesis of research findings and explanatory hypotheses. Working Papers on Bilingualism, No. 9.
Dasaradhi, K., Raghuram, A. V., Nandamuru, P., Badarinath, P. S., & Vaddeswaram, G. D. A. (2016). Need of ‘proficiency in English’ for engineering graduates. International Journal of English Language, Literature and Humanities, 4(12), 295-307.
Dearden, J. (2014). English as a medium of instruction–a growing global phenomenon. Retrieved from http://www.britishcouncil.org/education/ihe/knowledge-centre/english-language-higher-education/report-english-medium-instruction.
Deno, S. L. (2003). Developments in curriculum-based measurement. The Journal of Special Education, 37(3), 184-192.
Doiz, A., Lasagabaster, D., & Sierra, J. (2011). Internationalisation, multilingualism and English‐medium instruction. World Englishes, 30(3), 345-359.
Dolan, R. P., Hall, T. E., Banerjee, M., Chun, E., & Strangman, N. (2005). Applying principles of universal design to test delivery: The effect of computer-based read-aloud on test performance of high school students with learning disabilities. Journal of Technology, Learning, and Assessment, 3(7). Retrieved from http://files.eric.ed.gov/fulltext/EJ848517.pdf
Douglas, D. (2000). Assessing languages for specific purposes. Cambridge: Cambridge University Press.
Douglas, D., & Selinker, L. (1992). Analyzing oral proficiency test performance in general and specific purpose contexts. System, 20(3), 317-328.
Dudley-Evans, T. (2001). Team-teaching in EAP: Changes and adaptations in the Birmingham approach. In J. Flowerdew & M. Peacock (Eds.), Research perspectives on English for academic purposes (pp. 225-238). Cambridge, England: Cambridge University Press.
Dudley-Evans, T., & St John, M. J. (1998). Developments in English for specific purposes: A multi-disciplinary approach. Cambridge university press.
Ellis, R. (2005). Measuring implicit and explicit knowledge of a second language: A psychometric study. Studies in Second Language Acquisition, 27(2), 141-172.
Erlam, R. (2006). Elicited imitation as a measure of L2 implicit knowledge: An empirical validation study. Applied Linguistics, 27(3), 464-491.
Evans, S., & Morrison, B. (2011). Meeting the challenges of English-medium higher education: The first-year experience in Hong Kong. English for Specific Purposes, 30(3), 198-208.
Fellner, T. (2011). Developing an ESP presentation course for graduate students of science and engineering. 大学教育年報, (7), 1-16.
Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5(3), 239-256.
Gabrielatos, C. (2002). Reading loud and clear: Reading aloud in ELT. Retrieved from http://files.eric.ed.gov/fulltext/ED477572.pdf
Gallimore, R., & Tharp, R. G. (1981). The interpretation of elicited sentence imitation in a standardized context. Language Learning, 31(2), 369-392.
Gibson, S. (2008). Reading aloud: a useful learning tool? ELT Journal, 62(1), 29-36.
Graham, C. R. (2006). An analysis of elicited imitation as a technique for measuring oral language proficiency. Paper presented in the Fifteenth International Symposium on English Teaching (pp. 57-67).
Graham, C. R., McGhee, J., & Millard, B. (2010). The role of lexical choice in elicited imitation item difficulty. Paper presented in the 2008 Second Language Research Forum (pp. 57-72).
Griffiths, E. J. (2013). English as a medium of instruction in higher education institutions in Norway: A critical exploratory study of lecturers' perspectives and practices. PhD thesis, University of Exeter. Retrieved from https://ore.exeter.ac.uk/repository/handle/10871/14538
Hamp-Lyons, L., & Lumley, T. (Eds.). (2001). Assessing language for specific purposes. Edward Arnold.
Hellekjær, G. O. (2009). Academic English reading proficiency at the university level: A Norwegian case study. Reading in a Foreign Language, 21(2), 198.
Henning, G. (1983). Oral proficiency testing: Comparative validities of interview, imitation, and completion methods. Language Learning, 33(3), 315-332.
Hill, Y. Z., & Liu, O. L. (2012). Is there any interaction between background knowledge and language proficiency that affects TOEFL iBT® reading performance?. ETS Research Report Series, 2012(2), i-34.
Hsu, W. (2014). Measuring the vocabulary load of engineering textbooks for EFL undergraduates. English for Specific Purposes, 33, 54-65.
Hu, G., & Alsagoff, L. (2010). A public policy perspective on English medium instruction in China. Journal of Multilingual and Multicultural Development, 31(4), 365-382.
Huang, Y. H., & Tsou, W. L. (2013). Textbook vocabulary knowledge amongst engineering majors in Taiwan. 課程與教學, 16(2), 201-232.
Huang, Y. P. (2012). Design and implementation of English-medium courses in higher education in Taiwan: A qualitative case study. 英語教學期刊, 36(1), 1-51.
Hutchinson, T., & Waters, A. (1987). English for specific purposes. Cambridge University Press.
Hyland, K., & Hamp-Lyons, L. (2002). EAP: Issues and directions. Journal of English for Academic Purposes, 1(1), 1-12.
Hyland, K. (2006). English for academic purposes: An advanced resource book. London:Routledge.
Ibrahim, J. (2001). The implementation of EMI (English medium instruction) in Indonesian universities: Its opportunities, its threats, its problems, and its possible solutions. Paper presented at International TEFLIN Conference, Petra Christian University.
Jafari, M. (2013). A comparison between reading aloud and silent reading among Iranian EFL learners (Doctoral dissertation, Eastern Mediterranean University (EMU)-Doğu Akdeniz Üniversitesi (DAÜ)).
Jenkins, J. (2013). English as a lingua franca in the international university: The politics of academic English language policy. London: Routledge.
Jessop, L., Suzuki, W., & Tomita, Y. (2007). Elicited imitation in second language acquisition research. Canadian Modern Language Review, 64(1), 215-238.
Jin, N. Y., Ling, L. Y., Tong, C. S., Sahiddan, N., Philip, A., Azmi, N. H. N., & Tarmizi, M. A. A. (2013). Development of the engineering technology word list for vocational schools in Malaysia. International Education Research, 1(1), 43-59.
Joe, Y., & Lee, H. K. (2013). Does English-medium instruction benefit students in EFL contexts? A case study of medical students in Korea. The Asia-Pacific Education Researcher, 22(2), 201-207.
Johnston, V. (2015). The power of the read aloud in the age of the common core. The Open Communication Journal, 9(1), 34-38.
Jordan, R. R. (1997). English for academic purposes: A guide and resource book for teachers. Cambridge University Press.
Kao, S. M. & Liao, S. T. (2016, October). Developing glocalized materials for EMI courses in the humanities. Paper presented in 2016 Conference on EMI Practices in Higher Education, National Cheng Kung University.
Kim, A., Son, Y. D., & Sohn, S. Y. (2009). Conjoint analysis of enhanced English Medium Instruction for college students. Expert Systems with Applications, 36(6), 197-203.
Kim, H. H. (2013). Needs analysis for English for specific purpose course development for engineering students in Korea. International Journal of Multimedia and Ubiquitous Engineering, 8(6), 279-288.
Kim, K. R. (2011). Korean professor and student perceptions of the efficacy of English-medium instruction. Linguistic Research, 28(3), 711-741.
Kırkgöz, Y. (2009). Students’ and lecturers’ perceptions of the effectiveness of foreign language instruction in an English-medium university in Turkey. Teaching in Higher Education, 14(1), 81-93.
Kittidhaworn, P., Deaton, W. L., Phillips, P. D., Bower, D. S., & Fakhri, A. (2001). An assessment of the English-language needs of second-year Thai undergraduate engineering students in a Thai public university in Thailand in relation to the second-year EAP program in engineering. Ann Arbor, 1001, 48106-1346.
Klaassen, R. G. (2008). Preparing lecturers for English-medium instruction. Realizing Content and Language Integration in Higher Education, 32-42.
Konstantakis, N. (2007). Creating a business word list for teaching Business English. Elia: Estudios de Lingüística Inglesa Aplicada, (7), 79-102.
Krekeler, C. (2006). Language for special academic purposes (LSAP) testing: the effect of background knowledge revisited. Language Testing, 23(1), 99-130.
Lazaruk, W. (2007). Linguistic, academic, and cognitive benefits of French immersion. Canadian Modern Language Review, 63(5), 605-627.
Leung, C., & Lewkowicz, J. (2006). Expanding horizons and unresolved conundrums: Language testing and assessment. TESOL Quarterly, 40(1), 211-234.
Li, Y., & Qian, D. D. (2010). Profiling the Academic Word List (AWL) in a financial corpus. System, 38(3), 402-411.
LTTC, (2015). LTTC annual report. Taiwan: The Language Training & Testing Center.
Luoma, S. (2004). Assessing speaking. Cambridge language assessment series. Cambridge, England: Cambridge University Press.
Maiworm, F., & Wächter, B. (2002). English-language-taught degree programmes in European higher education. Bonn: Lemmens.
Mantali, S. M., Talib, R., & Mamu, R. (2013). The application of reading aloud technique to increase students’ pronunciation. Kim Fakultas Sastra dan Budaya, 1(1), 1-16.
Moiinvaziri, M. (2008, December). Motivational orientation in English language learning: A study of Iranian undergraduate students. Paper presented in Global practices of language teaching: Proceedings of the 2008 International Online Language Conference (pp. 126-136).
Muranoi, H. (2000). Focus on form through interaction enhancement: Integrating formal instruction into a communicative task in EFL classrooms. Language Learning, 50(4), 617-673.
Nakata, T. (2008). English vocabulary learning with word lists, word cards and computers: Implications from cognitive psychology research for optimal spaced learning. ReCALL, 20(1), 3-20.
Nation, I. S. P. (2001). Learning vocabulary in another language. Cambridge: Cambridge University Press.
Neri, A., Cucchiarini, C., & Strik, W. (2003, August). Automatic speech recognition for second language learning: how and why it actually works. Paper presented in Proc. ICPhS (pp. 1157-1160).
Neumeyer, L., Franco, H., Digalakis, V., & Weintraub, M. (2000). Automatic scoring of pronunciation quality. Speech Communication, 30(2), 83-93.
Norris, J. M., & Ortega, L. (2000). Effectiveness of L2 instruction: A research synthesis and quantitative meta‐analysis. Language Learning, 50(3), 417-528.
Northcott, J., & Brown, G. (2006). Legal translator training: Partnership between teachers of English for legal purposes and legal specialists. English for Specific Purposes, 25(3), 358-375.
Nunan, D. (2003). The impact of English as a global language on educational policies and practices in the Asia‐Pacific Region. TESOL Quarterly, 37(4), 589-613.
Oh, H., & Lee, H. (2010). Characteristics of effective English medium instruction and support measures. Modern English Education, 11(1), 191-202.
Owen, N. (2012). Can PTE Academic be used as an exit test for? Retrieved from http://pearsonpte.com/wp-content/uploads/2014/07/Owen_Executive_Summary.pdf
Papadima-Sophocleous, S., & Hadjiconstantinou, S. (2013). Students' reflections on the effectiveness of their ESAP courses: A multidisciplinary evaluation at tertiary level. ESP World, 38(14), 1-27.
Patton, M. Q. (2005). Qualitative research. John Wiley & Sons, Ltd.
Readability-score.com. (2012). Retrieved from http://www.readability-score.com/
Rebuschat, P., & Mackey, A. (2013). Prompted production. The Encyclopedia of applied linguistics, vol. 5. Oxford: Wiley-Blackwell.
Rowley, J. (1997). Beyond service quality dimensions in higher education and towards a service contract. Quality Assurance in Education, 5(1), 7-14.
Smit, U., & Dafouz, E. (2012). Integrating content and language in higher education: An introduction to English-medium policies, conceptual issues and research practices across Europe. AILA Review, 25(1), 1-12.
Suvorov, R., & Hegelheimer, V. (2013). Computer-assisted language testing. In A. J. Kunnan (Ed.), Companion to language assessment (pp. 593–613). Malden, MA: Wiley-Blackwell.
Suzuki, M., & Harada, Y. (2005). Using speech recognition for an automated test of spoken Japanese. Paper presented in 19th Pacific Asia Conference on Language, Information and Computation, PACLIC 19.
Tichá, R., Espin, C. A., & Wayman, M. M. (2009). Reading progress monitoring for secondary‐school students: Reliability, validity, and sensitivity to growth of reading‐aloud and maze‐selection measures. Learning Disabilities Research & Practice, 24(3), 132-142.
Tsai, S. C. (2009). Courseware development for semiconductor technology and its application into instruction. Computers & Education, 52(4), 834-847.
Tsou, W. (2009). Needs-based curriculum development: A case study of NCKU’s ESP program. Taiwan International ESP Journal, 1(1), 77-95.
Tsou, W. L. & Kao, S. M. (2016, October). Overview of EMI development. Paper presented in 2016 Conference on EMI Practices in Higher Education, National Cheng Kung University.
Ulanoff, S. H., & Pucci, S. L. (1999). Learning words from books: The effects of read-aloud on second language vocabulary acquisition. Bilingual Research Journal, 23(4), 409-422.
Valian, V. (2015). Bilingualism and cognition. Bilingualism: Language and Cognition, 18(01), 3-24.
Valipouri, L., & Nassaji, H. (2013). A corpus-based study of academic vocabulary in chemistry research articles. Journal of English for Academic Purposes, 12(4), 248-263.
Vinke, A. A., Snippe, J., & Jochems, W. (1998). English‐medium content courses in non English higher education: a study of lecturer experiences and teaching behaviours. Teaching in Higher Education, 3(3), 383-394.
Vinther, T. (2002). Elicited imitation: A brief overview. International Journal of Applied Linguistics, 12(1), 54-73.
Visconde, C. J. (2006). Attitudes of student teachers towards the use of English as language of Instruction for science and mathematics in the Philippines. The Linguistic Journal, 1(3), 7-33.
Wang, J., Liang, S. L., & Ge, G. C. (2008). Establishment of a medical academic word list. English for Specific Purposes, 27(4), 442-458.
Wannagat, U. (2007). Learning through L2–Content and language integrated learning (CLIL) and English as medium of instruction (EMI). International Journal of Bilingual Education and Bilingualism, 10(5), 663-682.
Weir, C. J., & Wu, J. R. (2006). Establishing test form and individual task comparability: A case study of a semi-direct speaking test. Language Testing, 23(2), 167-197.
Weitze, M., McGhee, J., Graham, C. R., Dewey, D. P., & Eggett, D. L. (2011). Variability in L2 acquisition across L1 backgrounds. Paper presented in the 2009 Second Language Research Forum, 152-163.
West, M., & West, M. P. (Eds.). (1953). A general service list of English words: with semantic frequencies and a supplementary word-list for the writing of popular science and technology. Addison-Wesley Longman Limited.
Wilkinson, R. (2013). English-medium instruction at a Dutch university: Challenges and pitfalls. English-medium instruction at universities: Global challenges, 3-24.
Williams, D. G. (2015). A systematic review of English Medium Instruction (EMI) and implications for the south Korean higher education context. Retrieved from https://blog.nus.edu.sg/eltwo/files/2015/04/EMI-in-South-Korea_editforpdf-1gmsyy5.pdf.
Wu, W. M., & Stansfield, C. W. (2001). Towards authenticity of task in test development. Language Testing, 18(2), 187-206.
Yan, X., Maeda, Y., Lv, J., & Ginther, A. (2015). Elicited imitation as a measure of second language proficiency: A narrative review and meta-analysis. Language Testing, doi:10.1177/0265532215594643.
Yang, T. L. (2007). Factors affecting EFL teachers' classroom assessment practices of young language learners. Unpublished Ph.D. dissertation, the University of Iowa. Retrieved from ProQuest, http://gradworks.umi.com/32/81/3281422.html.
Yao, K., Yu, D., Seide, F., Su, H., Deng, L., & Gong, Y. (2012, December). Adaptation of context-dependent deep neural networks for automatic speech recognition. Paper presented in Spoken Language Technology Workshop (SLT), 2012 IEEE (pp.366-369).
Zechner, K., Higgins, D., Xi, X., & Williamson, D. M. (2009). Automatic scoring of non-native spontaneous speech in tests of spoken English. Speech Communication, 51(10), 883-895.
B. Chinese references
Hu, Z. J. (胡志軍), & Liu, Y. S. (劉玉山). “誦”論. 東岳論叢, 7, 115-118.
Kao, X. (高霞), Zhu, Z. C. (朱正才), & Yang, H. Z. (楊惠中) (2016). 朗讀在外語教學和測驗中的作用. 外語界, (2), 64-71.
NCKU ESP Program. (2011). ESP: English for general science. Taipei: 書林出版有限公司 (Bookman Books).
The Language Training & Testing Center (LTTC) (財團法人語言訓練中心). Retrieved from https://www.lttc.ntu.edu.tw/
The L Labs Company (艾爾科技). Retrieved from http://www.myet.com/MyETWeb/PersonalizedPage.aspx
Full-text availability
  • The author has authorized on-campus browsing and printing of the electronic full text, available from 2020-01-20.

