Institutional Repository, University of Moratuwa.

An automatic classifier for exam questions in engineering : a process for Bloom's taxonomy


dc.contributor.author Jayakodi, K
dc.contributor.author Bandara, M
dc.contributor.author Perera, GIUS
dc.date.accessioned 2019-08-16T10:26:32Z
dc.date.available 2019-08-16T10:26:32Z
dc.identifier.uri http://dl.lib.mrt.ac.lk/handle/123/14795
dc.description.abstract Assessment is an essential activity for achieving the objectives of a course and for improving the teaching and learning process. Several educational taxonomies can be used to assess the efficacy of assessment in engineering education by aligning assessment tasks with the intended learning outcomes and the teaching and learning activities. This research focuses on using a learning taxonomy well suited to computer science and engineering to categorize exam questions and assign weights to them according to the taxonomy levels. Existing Natural Language Processing (NLP) techniques and WordNet similarity algorithms, via the NLTK and WordNet packages, were used, and a new set of rules was developed to identify the category and the weight of each exam question according to Bloom's taxonomy. Using the results, evaluators can analyze and design question papers that measure student knowledge from various aspects and levels. A prior evaluation was conducted to identify the NLP preprocessing techniques most suitable for this context. A sample set of end-semester examination questions from the Department of Computer Science and Engineering (CSE), University of Moratuwa, was used to evaluate the accuracy of the question classification; the weight assignment and the main category assignment were validated against manual classification by a domain expert. The outcome of the classification is a set of weights assigned under each taxonomy category, indicating the likelihood of a question falling into that category. The category with the highest weight was taken as the main category of the exam question. Using the generated rule set, the accuracy of detecting the correct main category of a question is 82%. en_US
dc.language.iso en en_US
dc.subject Question classification; Assessment in Engineering; Teaching and Supporting Learning; Bloom’s taxonomy; Learning Analytics; Natural Language Processing en_US
dc.title An automatic classifier for exam questions in engineering : a process for Bloom's taxonomy en_US
dc.type Conference-Abstract en_US
dc.identifier.faculty Engineering en_US
dc.identifier.department Department of Computer Science and Engineering en_US
dc.identifier.year 2015 en_US
dc.identifier.conference IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE - 2015) en_US
dc.identifier.place Zhuhai, China en_US
dc.identifier.pgnos pp. 195 - 202 en_US
dc.identifier.doi 10.1109/TALE.2015.7386043 en_US
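The abstract above describes scoring exam questions against Bloom's taxonomy using WordNet similarity via NLTK and then taking the highest-weight category as the main category. The following is a minimal illustrative sketch of that general idea, not the authors' rule set: the seed-verb lists, the use of path similarity over verb synsets, and the weight normalisation are assumptions made for illustration only.

```python
# Illustrative sketch: weight an exam question against Bloom's taxonomy categories
# by comparing its action verbs to assumed seed verbs via WordNet path similarity.
# Requires: pip install nltk, plus nltk.download('punkt'), nltk.download('wordnet'),
# nltk.download('omw-1.4'), nltk.download('averaged_perceptron_tagger')

import nltk
from nltk.corpus import wordnet as wn

# Assumed seed verbs per Bloom category (illustrative, not from the paper).
BLOOM_SEED_VERBS = {
    "Remember":   ["define", "list", "state", "identify"],
    "Understand": ["explain", "describe", "summarize", "classify"],
    "Apply":      ["apply", "calculate", "solve", "demonstrate"],
    "Analyze":    ["analyze", "compare", "differentiate", "examine"],
    "Evaluate":   ["evaluate", "justify", "critique", "assess"],
    "Create":     ["design", "develop", "construct", "propose"],
}

def verb_similarity(verb_a: str, verb_b: str) -> float:
    """Best WordNet path similarity between any verb senses of the two words."""
    best = 0.0
    for sa in wn.synsets(verb_a, pos=wn.VERB):
        for sb in wn.synsets(verb_b, pos=wn.VERB):
            sim = sa.path_similarity(sb)
            if sim is not None and sim > best:
                best = sim
    return best

def bloom_weights(question: str) -> dict:
    """Assign a normalised weight per Bloom category from the question's verbs."""
    tokens = nltk.word_tokenize(question.lower())
    tagged = nltk.pos_tag(tokens)
    verbs = [w for w, tag in tagged if tag.startswith("VB")] or tokens
    weights = {
        category: max(verb_similarity(v, s) for v in verbs for s in seeds)
        for category, seeds in BLOOM_SEED_VERBS.items()
    }
    total = sum(weights.values()) or 1.0
    return {c: w / total for c, w in weights.items()}

if __name__ == "__main__":
    q = "Explain the difference between a process and a thread."
    weights = bloom_weights(q)
    print(weights)
    print("Main category:", max(weights, key=weights.get))
```

As in the paper's approach, the category receiving the highest weight would be reported as the question's main category; the paper's actual method additionally applies a hand-crafted rule set and tuned NLP preprocessing, which this sketch does not reproduce.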

