The Electronic Journal of e-Learning provides perspectives on topics relevant to the study, implementation and management of e-Learning initiatives

Journal Article

Pedagogical Approaches and Technical Subject Teaching through Internet Media  pp52-65

Olubodun Olufemi

© Mar 2008 Volume 6 Issue 1, Editor: Shirley Williams, pp1 - 75


Abstract

This paper compares the instructivist and constructivist pedagogical approaches and their applications in different situations, making clear the comparative advantages of each. Instructivist learning places the teacher in authority, while constructivism shifts authority to no one in particular, sharing responsibility between learner and teacher: the teacher no longer assumes responsibility for passing information and knowledge to the learner, but only guides the learner to discover the 'objective truth' and to attain the learning objectives. The teaching and learning process is redefined in the light of this 'new' understanding, and practical applications of both pedagogical approaches are considered. A study guide (Appendix 1) is presented as an example of socio‑constructivist pedagogy, where the emphasis is on learning rather than on teaching.

 

Keywords: Study guide, e-learning, pedagogy, socio-constructivism, test, evaluation, LMS, virtual classroom, asynchronous, instructivism, construction technique

 


Journal Article

Enhanced Approach of Automatic Creation of Test Items to foster Modern Learning Setting  pp23-38

Christian Gutl, Klaus Lankmayr, Joachim Weinhofer, Margit Hofler

© Apr 2011 Volume 9 Issue 1, ECEL 2010 special issue, Editor: Carlos Vaz de Carvalho, pp1 - 114


Abstract

Research into the automated creation of test items for assessment purposes has become increasingly important in recent years. Automatic question creation makes it possible to support personalized and self‑directed learning activities by preparing appropriate, individualized test items with relatively little effort, or even fully automatically. In this paper, which is an extended version of the conference paper by Gütl, Lankmayr and Weinhofer (2010), we present our most recent work on the automated creation of different types of test items. More precisely, we describe the design and development of the Enhanced Automatic Question Creator (EAQC), which extracts the most important concepts from textual learning content and creates single‑choice, multiple‑choice, completion and open‑ended questions on the basis of these concepts. Our approach combines statistical, structural and semantic methods of natural language processing with a rule‑based AI solution for concept extraction and test item creation. The prototype is designed flexibly, so that the methods mentioned above can easily be changed or improved. The EAQC is designed to deal with multilingual learning material; its current version supports English and German content. Furthermore, we discuss the usage of the EAQC from the users' viewpoint and present first results of an evaluation study in which students were asked to rate the relevance of the extracted concepts and the quality of the created test items. The results of this study show that the concepts extracted and questions created by the EAQC were indeed relevant with respect to the learning content, and that the level of the questions and the provided answers was appropriate.
Regarding the terminology of the questions and the selection of the distractors, which were criticized most during the evaluation study, we discuss some aspects that could be considered in future to enhance the automatic generation of questions. Nevertheless, the results are promising and suggest that the quality of the automatically extracted concepts and created test items is comparable to that of human‑generated ones.
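The concept‑extraction and completion‑exercise pipeline described above can be illustrated with a deliberately minimal sketch. This is not the EAQC's actual algorithm (which combines statistical, structural and semantic NLP methods with rule‑based item creation); here candidate concepts are simply ranked by term frequency, and a cloze item is formed by blanking the top concept. The function names, stop list and sample text are all invented for illustration.

```python
from collections import Counter
import re

def extract_concepts(text, n=3):
    """Rank candidate concepts by raw term frequency: a deliberately
    simple stand-in for the EAQC's concept-extraction stage."""
    words = re.findall(r"[A-Za-z]{4,}", text.lower())
    stop = {"this", "that", "with", "from", "have", "which", "were", "their"}
    counts = Counter(w for w in words if w not in stop)
    return [w for w, _ in counts.most_common(n)]

def make_cloze(sentence, concept):
    """Blank out the concept to form a completion exercise."""
    return re.sub(re.escape(concept), "_____", sentence, flags=re.IGNORECASE)

text = ("Photosynthesis converts light energy into chemical energy. "
        "Plants perform photosynthesis in chloroplasts.")
concepts = extract_concepts(text)
item = make_cloze("Plants perform photosynthesis in chloroplasts.", concepts[0])
print(concepts[0], "->", item)
```

A real system would, as the abstract notes, also need distractor selection and multilingual handling, which this sketch omits entirely.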

 

Keywords: e-assessment, automated test item creation, distance learning, self-directed learning, natural language processing, computer-based assessment

 


Journal Article

Employing Online S‑P Diagnostic Table for Qualitative Comments on Test Results  pp263-271

Chien-hwa Wang, Cheng-ping Chen

© Aug 2013 Volume 11 Issue 3, ECEL 2012, Editor: Hans Beldhuis and Koos Winnips, pp168 - 272


Abstract

The major concerns of adaptive testing studies have been the effectiveness and efficiency of the systems built for the research experiments. Such general information has been criticised for falling short of providing qualitative descriptions of learning performance. In the 1970s, Takahiro Sato of Japan proposed an analytical diagram called the Student‑Problem Chart (S‑P Chart). The S‑P Chart can establish a learning diagnostic table that comments on student learning performance in verbal form, and advances in computer technology have made the S‑P analytical process more practical for school teachers. This study examined how online comments provided by the S‑P diagnostic table affect students' learning attitude. One hundred sixth‑grade students were selected as the subjects of the study. An online embedded test was given to the subjects, and an S‑P diagnostic table was drawn by the computer to display instant comments on each student's learning performance. A questionnaire survey and in‑depth interviews were conducted after the experiment. Results indicated that students liked the online qualitative comments, because they could instantly understand why they performed well or poorly on the test, which goes far beyond what numerical scores can explain. The results also showed that the online S‑P diagnostic table made students more circumspect in answering the test questions, in order to reduce careless mistakes, and more likely to review what they had missed on the test. However, the S‑P comment table seemed to have no effect on improving their learning performance. An online iterative drilling platform was consequently built to work with the S‑P diagnostic process, providing constructive remediation for students who performed poorly on the S‑P chart.
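The core ordering step behind Sato's S‑P Chart can be sketched as follows. This is an illustrative reconstruction, not the system used in the study: a binary student‑by‑item response matrix is reordered so that stronger students and easier items move toward the top‑left, which is the arrangement from which the S and P curves and the diagnostic comments are then derived.

```python
def sp_chart(responses):
    """Reorder a binary student-by-item response matrix into S-P form:
    students sorted by total score (descending), items sorted by how
    many students answered them correctly (descending)."""
    students = sorted(range(len(responses)),
                      key=lambda s: -sum(responses[s]))
    n_items = len(responses[0])
    items = sorted(range(n_items),
                   key=lambda i: -sum(row[i] for row in responses))
    chart = [[responses[s][i] for i in items] for s in students]
    return chart, students, items

# Rows = students, columns = test items (1 = correct answer).
matrix = [
    [1, 0, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 1, 0],
]
chart, student_order, item_order = sp_chart(matrix)
for row in chart:
    print(row)
```

In the reordered chart, a student whose row mixes many misses on easy items with hits on hard items would attract a cautionary comment; the full S‑P method also computes caution indices for exactly that purpose, which this sketch omits.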

 

Keywords: adaptive test, student-problem chart, learning attitude, iterative drilling

 


Journal Article

Mitigating the Mathematical Knowledge gap Between High School and First Year University Chemical Engineering Mathematics Course  pp68-83

Moses Basitere, Eunice Ivala

© Feb 2015 Volume 13 Issue 2, ICEL2014, Editor: Paul Griffiths, pp57 - 148


Abstract

This paper reports on a study carried out at a University of Technology, South Africa, aimed at identifying a mathematical knowledge gap and evaluating the intervention designed to bridge that gap amongst students studying first‑year mathematics in the Chemical Engineering Extended Curriculum Program (ECP). In this study, a pre‑test was used as a diagnostic test for incoming Chemical Engineering students, with the aim of identifying the mathematical knowledge gap and providing students with support at their starting level of mathematical knowledge and skills. After the diagnostic test, an intervention called the autumn school was organized to help bridge the identified knowledge gap, and a closed Facebook group served as a platform for providing student support after school hours. After the autumn school, a post‑test was administered to measure whether the knowledge gap had narrowed. Both quantitative and qualitative methods of data collection were used: the pre‑test identified the mathematical knowledge gap, while the post‑test measured whether the gap had decreased after the intervention. Focus group interviews were carried out with the students to elicit their opinions on whether the intervention had helped them. Students' participation on Facebook, in terms of posts, comments and likes, was also evaluated in comparison with their academic performance. Quantitative data were analysed using descriptive statistics, while qualitative data were analysed using an inductive strategy. Results showed that all the students in this study had the mathematical knowledge gap, as no student in the class scored 50% on the overall pre‑test.
Findings further revealed that the intervention played a major role in alleviating the mathematical knowledge gap for some of the students (with one third of the students scoring 50% and above in the post‑test), and that there was no positive correlation between students' academic performance on the post‑test and their participation in the Facebook group. We hope that the insights generated in this study will help other institutions design interventions for bridging the knowledge gap. Reasons for the lack of improvement in the knowledge gap of two thirds of the students in this class are also highlighted.

 

Keywords: knowledge gap, extended curriculum program, descriptive statistics, inductive strategy, diagnostic test, autumn school, Facebook closed group

 


Journal Article

The Scoring of Matching Questions Tests: A Closer Look  pp268-276

Antonín Jančařík, Yvona Kostelecká

© Apr 2015 Volume 13 Issue 4, ECEL 2014, Editor: Kim Long, pp205 - 315


Abstract

Electronic testing has become a regular part of online courses. Most learning management systems offer a wide range of tools that can be used in electronic tests. With respect to time demands, the most efficient tools are those that allow automatic assessment. This paper focuses on one of these tools: matching questions, in which one question can be paired with multiple response terms. The aim of the paper is to identify how the types of questions used in a test can affect student results, expressed as test scores. The authors focus mainly on the possible increase in scores that can occur with closed questions, when students, after selecting the answers to the questions they know, guess the answers to the remaining questions (see Diamond and Evans, 1973; Ebel and Frisbie, 1986; Albanese, 1986). The authors show how the number of distractors (unused answers) included in a question influences the overall test score. The data on multiple‑choice and alternative‑response tests are well known, but little is known about matching questions. Estimating formula scores for matching‑question tests is important for determining the threshold at which students demonstrate that they possess the required level of knowledge. Here the authors compare the scores obtained for three types of closed questions: multiple choice, alternative response and matching questions. The analysis of matching assignments in this paper demonstrates that they are a useful tool for testing skills, but only if the assignment has at least two distractors; then the informational value of this type of assignment is higher than that of multiple‑choice assignments with three distractors.
The results currently indicate that these types of assignment are not useful if the objective of the testing is to rank students or to distinguish between very good students … and this applies even if two distractors are used. For such an objective, it is better to use multiple‑choice assignments.
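The guessing effect the authors analyse can be illustrated with a baseline expectation. This is a generic calculation, not the authors' formula‑scoring model: when a student matches n questions to n + d response options entirely at random (each option used at most once), each question is paired with its correct response with probability 1/(n + d) by symmetry, so the expected chance score is n/(n + d). Adding distractors therefore lowers the expected score from guessing, which is why the number of distractors matters for the threshold discussion above.

```python
import random

def expected_guess_score(n_questions, n_distractors):
    """Expected number of correct pairings when n questions are matched
    to n + d response options entirely at random: n / (n + d)."""
    return n_questions / (n_questions + n_distractors)

def simulate(n_questions, n_distractors, trials=100_000, seed=1):
    """Monte Carlo check of the closed-form expectation: option q is
    taken to be the correct response for question q."""
    rng = random.Random(seed)
    options = list(range(n_questions + n_distractors))
    total = 0
    for _ in range(trials):
        guess = rng.sample(options, n_questions)  # random injective matching
        total += sum(1 for q, a in enumerate(guess) if q == a)
    return total / trials

print(expected_guess_score(4, 0))
print(expected_guess_score(4, 2))
```

Note that with no distractors (d = 0), one correct match is expected on average regardless of n, the classic fixed‑point result for random permutations, whereas two distractors on a four‑item assignment cut the expected chance score to 4/6.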

 

Keywords: testing, random score, test results, matching type, score formula, formula scoring

 


Journal Article

Students' Perceived Usefulness of Formative Feedback for a Computer‑adaptive Test  pp31-38

Mariana Lilley, Trevor Barker

© Mar 2007 Volume 5 Issue 1, ECEL 2006, Editor: Shirley Williams, pp1 - 86


Abstract

In this paper we report on research related to the provision of automated feedback based on a computer adaptive test (CAT) used in formative assessment. A cohort of 76 second‑year university undergraduates took part in a formative assessment with a CAT and were provided with automated feedback on their performance. A sample of students completed a short questionnaire assessing their attitude to the quality of the feedback provided. In this paper, we describe the CAT and the system of automated feedback used in our research, and we also present the findings of the attitude survey. On average, students reported a positive attitude to our automated feedback system. Statistical analysis showed that attitude to feedback was not related to performance on the assessment (p>0.05). We discuss this finding in the light of the requirement to provide fast, efficient and useful feedback at the appropriate level for students.

 

Keywords: computer-assisted assessment, formative assessment, adaptive testing

 


Journal Article

The Role of Essay Tests Assessment in e‑Learning: A Japanese Case Study  pp173-178

Minoru Nakayama, Hiroh Yamamoto

© Mar 2010 Volume 8 Issue 2, ECEL 2009, Editor: Shirley Williams, Florin Salajan, pp51 - 208


Abstract

e‑Learning has some restrictions on how learning performance is assessed. Online testing usually takes the form of multiple‑choice questions, without any essay‑type assessment of learning. Major reasons for employing multiple‑choice tasks in e‑learning include ease of implementation and ease of managing learners' responses. To address this limitation in online assessment, this study investigated an automatic assessment system, a natural language processing tool, for conducting essay‑type tests in online learning. The study also examined the relationship between learner characteristics and learner performance in essay testing. Furthermore, the use of evaluation software for scoring Japanese essays was compared with experts' assessment and scoring of the essay tests. Students were enrolled in two‑unit courses taught by the same professor: a hybrid learning course at bachelor's level, a fully online course at bachelor's level, and a hybrid learning course at master's level. All students took the final test, which included two essay tests, at the end of the course, and received the appropriate credit units. Learner characteristics were measured using five constructs: motivation, personality, thinking styles, information literacy and self‑assessment of online learning experience. The essay tests were assessed by two outside experts, who found the two essay tests to be sufficient for course completion. Another score, generated by the assessment software, consisted of three factors: rhetoric, logical structure and content fitness. Results show that the experts' assessment correlates significantly with the logical‑structure factor of the essay for all courses. This suggests that expert evaluation of the essay focuses on logical structure rather than on the other factors. When comparing the experts' assessment scores between the hybrid learning and fully online courses at bachelor's level, no significant differences were found.
This indicates that in fully online learning, as well as in hybrid learning, learning performance can be measured using essay tests without the need for a face‑to‑face session to conduct this type of assessment.

 

Keywords: online learning, essay-testing, learner characteristics, learning performance

 


Journal Issue

Volume 6 Issue 1 / Mar 2008  pp1‑75

Editor: Shirley Williams


Editorial

A new issue of EJEL brings seven interesting pieces of research from different countries around the world. The learners involved in these studies range from school children to mature postgraduate students; they are of a variety of nationalities, they have differing previous experience, and they are of both genders. The learners have different modes of working, on‑campus or at a distance, and the educators have a variety of approaches and strategies to meet the difficulties their learners face. Reading these papers gives insight into the challenges that the e‑Learning community faces. Overwhelmingly, I am left with the view that there is no one‑size‑fits‑all in e‑Learning; we must be prepared to consider the individual if e‑Learning is to succeed.

 

Keywords: Asynchronous, community participation, construction technique, culture, curriculum development, distance learning, diversity, e-learning, engagement, evaluation, flexible learning, Greece, higher education, ICT, information and communication technology, instructional design, instructivism, international, LMS, marginalized, online courses, online evaluation, online learning, participation, pedagogical development, postgraduate studies, quality assessment, secondary, socio-constructivism, study guide, test, time-management, virtual classroom, widening participation

 
