In 2004 the Australian Flexible Learning Framework developed a suite of quantitative and qualitative indicators on the uptake, use and impact of e‑learning in the Vocational Education and Training (VET) sector. These indicators were used to design items for a survey to gather quantitative data for benchmarking. A series of four surveys gathered data from VET providers, teachers, students and their employers. The data formed baseline indicators that were used to establish organisational goals and benchmarks for e‑learning. These indicators were the first known set for benchmarking e‑learning in Australia. The case studies in this paper illustrate ways in which VET providers have approached e‑learning benchmarking, the benefits achieved and the lessons they learned. The cases exemplify how VET providers have adapted the baseline indicators, and how the indicators informed organisational plans and e‑learning outcomes. The benefits of benchmarking are categorised under three purposes: reporting, performance management, and service improvement. A set of practical strategies is derived from the cases for consideration by other organisations interested in benchmarking e‑learning services.
Keywords: e-learning indicators, e-learning uptake and outcomes, benchmarks, planning for e-learning benchmarking, case studies
As the demand for online learning environments grows in higher education, so does the need for systematic application of learning and educational theory to the design, development and delivery of assessment strategies within these environments. However, there is little guidance in the form of principled design frameworks that can assist the design practitioner in the development of online assessment strategies. From four cases, we have identified six design principles that represent the collective experience of our team of design practitioners in creating assessment strategies for online teaching and learning environments: (a) technology affordances, (b) alignment of objectives with assessment, (c) discipline‑specific practices and approaches, (d) meaningful and timely feedback, (e) authenticity and transferability, and (f) transparency of assessment criteria. We present in‑situ qualitative case studies that articulate how these principles have informed our design practice in online assessment strategy development.