The Electronic Journal of e-Learning provides perspectives on topics relevant to the study, implementation and management of e-Learning initiatives.

Journal Article

Benefits of e‑Learning Benchmarks: Australian Case Studies  pp11-20

Sarojni Choy

© Mar 2007 Volume 5 Issue 1, ECEL 2006, Editor: Shirley Williams, pp1-86


Abstract

In 2004 the Australian Flexible Learning Framework developed a suite of quantitative and qualitative indicators on the uptake, use and impact of e-learning in the Vocational Education and Training (VET) sector. These indicators were used to design items for a survey to gather quantitative data for benchmarking. A series of four surveys gathered data from VET providers, teachers, students and their employers. The data formed baseline indicators that were used to establish organisational goals and benchmarks for e-learning; they were the first known set for benchmarking e-learning in Australia. The case studies in this paper illustrate ways in which VET providers have approached e-learning benchmarking, the benefits achieved and the lessons they learned. The cases exemplify how VET providers have adapted the baseline indicators and how the indicators informed organisational plans and e-learning outcomes. The benefits of benchmarking are categorised under three purposes: reporting, performance management, and service improvement. A set of practical strategies is derived from the cases for consideration by other organisations interested in benchmarking e-learning services.
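As a rough illustration of how survey responses can yield baseline indicators and organisational benchmarks of the kind this abstract describes, consider the minimal Python sketch below. The respondent groups, indicator names, scores and the 10% uplift rule are all invented for illustration; the Framework's actual indicators and targets are not reproduced here.

```python
# Hypothetical sketch: deriving a baseline indicator and a benchmark
# target from survey responses. Groups, indicators, scores and the
# uplift rule are illustrative assumptions, not the Framework's data.
from statistics import mean

# Each response records a respondent group and an indicator score (0-100),
# e.g. self-reported e-learning uptake.
responses = [
    {"group": "students", "indicator": "uptake", "score": 72},
    {"group": "students", "indicator": "uptake", "score": 64},
    {"group": "teachers", "indicator": "uptake", "score": 55},
    {"group": "employers", "indicator": "impact", "score": 60},
]

def baseline(responses, group, indicator):
    """Mean score for one respondent group on one indicator."""
    scores = [r["score"] for r in responses
              if r["group"] == group and r["indicator"] == indicator]
    return mean(scores) if scores else None

# A provider might set its organisational benchmark as a fixed uplift
# on the baseline (the 10% margin here is an assumption).
students_uptake = baseline(responses, "students", "uptake")
benchmark = round(students_uptake * 1.10, 1)
print(f"baseline={students_uptake}, benchmark target={benchmark}")
```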


Keywords: e-learning indicators, e-learning uptake and outcomes, benchmarks, planning for e-learning benchmarking, case studies


Journal Article

Measuring Success in e‑Learning — A Multi‑Dimensional Approach  pp99-110

Malcolm Bell, Stephen Farrier

© Apr 2008 Volume 6 Issue 2, Editor: Shirley Williams, pp99-182


Abstract

In 1999 Northumbria University published a strategy document entitled "Towards the web-enabled University". This prefaced an assessment of need and of available platforms for developing online teaching and learning, which in turn led in 2001 to the roll-out and institution-wide adoption of the Blackboard Virtual Learning Environment (VLE), now referred to as our eLearning Platform or eLP. Within a very few years we had over 90% take-up by academic staff, and the eLP had become integral to the learning of virtually all our students. What has always been relatively easy to measure is the number of users, the frequency of use, the number of courses, levels of technological infrastructure, and so on. However, with the publication of the Higher Education Funding Council for England (HEFCE) e-learning strategy in 2005, it became apparent that such quantitative data were not particularly helpful in measuring how the university matched up to the 10-year aspirations of that document and its measures of success. Consequently, an ongoing exploration was embarked upon to establish where we were and what we should prioritise in order to embed e-learning as envisaged within the HEFCE strategy. This involved a number of key approaches (see the sketch following this abstract):

- The HEFCE measures were broken down into manageable units, creating sixteen measures in all, each with descriptors ranging from "full achievement" through to "no progress to date" and suggested sources of supporting evidence.
- A series of interviews with key staff was set up, in which staff were asked to rank where they felt the university stood against each measure and to identify the evidence that would support their views.
- An online academic staff survey invited staff to express degrees of agreement with a number of statements based on the HEFCE criteria; this was followed up by a range of face-to-face interviews.
- An online student survey asked students to express degrees of agreement with similar statements; responses were followed up with an independent student focus group that explored issues in greater depth.

The outcomes of the three evidence-gathering approaches were then combined and an interim report prepared which identified strengths and areas for further development; some of the latter are already being addressed. Subsequently, the university joined phase 2 of a national benchmarking e-learning in Higher Education exercise, running from May to December 2007 and supported by the Higher Education Academy (HEA) and the Joint Information Systems Committee (JISC). During this exercise we engaged in a deeper exploration against a wider set of criteria, based upon the "Pick & Mix" methodology (Bacsich, 2007), which comprises 20 core criteria plus the option of a number of supplementary criteria. Through these approaches we can set a baseline for where we currently are and revisit the criteria later to measure progress in the areas identified for development. This paper shares the methodologies used, identifies key outcomes and reflects upon those outcomes from both an institutional and a sectoral perspective.
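A hypothetical sketch of how the three evidence sources might be combined on one of the sixteen measures: ordinal descriptors are mapped to scores and averaged. The intermediate level names, the equal weights and the example measure are assumptions for illustration, not the authors' actual instrument.

```python
# Illustrative sketch only: scoring one HEFCE-style measure on an
# ordinal scale and combining rankings from the three evidence sources
# described above. Level names beyond the two quoted in the abstract,
# the weights and the measure itself are assumptions.
LEVELS = {"no progress to date": 0, "early progress": 1,
          "partial achievement": 2, "full achievement": 3}

# Rankings of one hypothetical measure from key-staff interviews,
# the staff survey and the student survey.
evidence = {
    "key_staff": "partial achievement",
    "staff_survey": "early progress",
    "student_survey": "partial achievement",
}

def combined_score(evidence, weights=None):
    """Weighted mean of ordinal levels across evidence sources."""
    weights = weights or {src: 1.0 for src in evidence}
    total = sum(LEVELS[level] * weights[src]
                for src, level in evidence.items())
    return total / sum(weights.values())

score = combined_score(evidence)
print(f"combined level: {score:.2f} of {max(LEVELS.values())}")
```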


Keywords: measuring, benchmarking, methodology


Journal Article

Piloting a Process Maturity Model as an e‑Learning Benchmarking Method  pp49-58

Jim Petch, Gayle Calverley, Hilary Dexter, Tim Cappelli

© Mar 2007 Volume 5 Issue 1, ECEL 2006, Editor: Shirley Williams, pp1-86


Abstract

As part of a national e-learning benchmarking initiative of the UK Higher Education Academy, the University of Manchester is carrying out a pilot study of a method to benchmark e-learning in an institution. The pilot was designed to evaluate the operational viability of a method based on the e-Learning Maturity Model developed at Victoria University of Wellington, New Zealand, which was in turn derived from Carnegie Mellon's widely accepted Capability Maturity Model. The method is based on gathering evidence about the many interdependent processes in the e-learning and student lifecycles, and it takes a holistic view of maturity, addressing multiple aspects. This paper sets out the rationale for the selected method and explains the adoption of a process-based approach. It describes the iterative refinement of the questionnaire used to elicit evidence for measures of five aspects of maturity across a range of e-learning processes in five process areas. The pilot study will produce a map of evidence of e-learning practice across the process matrix, together with a measure of the degree of embedding, expressed as capability and maturity, in a sample of faculties within the institution. For a measure of where an organisation stands on a particular aspect of e-learning to be useful, the organisation needs to be able to act on that measure, identifying any new activities required or modifying current activities to improve its processes. The pilot study therefore aims to evaluate the potential for improvement inherent in the capability maturity model and to examine the resource implications of obtaining useful evidence. A successful benchmarking effort should inform an institution's planning and resourcing processes, and the outcomes of this pilot should lead to an informed decision about a method for benchmarking the embedding of e-learning, both for the particular institution and for the sector, which in turn can lead to operational suggestions for improvement.
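The sketch below illustrates the general shape of a maturity-matrix assessment of this kind: processes rated against five capability dimensions, with the weakest dimension surfacing as a candidate priority. The dimension names follow Marshall's published eMM; the process names, the rating scale and the aggregation rule are illustrative assumptions rather than the pilot's actual method.

```python
# A minimal sketch, assuming eMM-style assessments: each e-learning
# process is rated on five dimensions of capability. Dimension names
# follow Marshall's eMM; the processes and ratings are invented, and
# the 0-4 scale is an assumption.
DIMENSIONS = ["delivery", "planning", "definition",
              "management", "optimisation"]

# Ratings from 0 (not adequate) to 4 (fully adequate): one row per
# process, one column per dimension. Process codes are hypothetical.
ratings = {
    "L1: learning design":    [3, 2, 2, 1, 0],
    "D1: staff development":  [2, 2, 1, 1, 1],
    "O1: strategic planning": [3, 3, 2, 2, 1],
}

def weakest_dimension(ratings):
    """Return the dimension with the lowest mean rating across processes."""
    means = {
        dim: sum(row[i] for row in ratings.values()) / len(ratings)
        for i, dim in enumerate(DIMENSIONS)
    }
    return min(means, key=means.get), means

dim, means = weakest_dimension(ratings)
print(f"prioritise: {dim} ({means[dim]:.2f} mean rating)")
```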


Keywords: embedding, e-learning, process, maturity, benchmarking
