Conference Sessions – Wednesday

Wednesday, March 15, 2017

Morning Sessions (10:30 AM – 12:00 PM)


Session: Keynote Discussion and Panel

Keynote Q & A 1A1 (30 min)
Panel 1A2 Round Table (55 min)

Session: LA Infrastructure

Presentation 1B1 (30 min): “Crossing the Chasm to Big Data: Early Detection of at-Risk Students in a Cluster Computing Environment”
(Practitioner Presentation)
by Eitel Lauria, Edward Presutti, Mathew Sokoloff & Michael Guarino 
Abstract:
This work describes the ongoing redesign of the early detection framework originally developed at Marist College as part of the Open Academic Analytics Initiative, a multi-year EDUCAUSE/NGLC research grant funded by the Gates Foundation and focused on improving student retention rates. The early detection framework uses machine learning to identify, early in the semester, students who are potentially at academic risk. The paper specifically tackles the migration of the early detection framework from a single-node architecture to a big data, cluster computing architecture using Apache Hadoop and Spark.
Presentation 1B2 (30 min): “SURF Learning Analytics experiment: Hands-on experience for Dutch Higher education”
(Practitioner Presentation)
by Jocelyn Manderveld, Herman van Dompseler & Marieke de Wit
Abstract:
In 2016 SURFnet started the Learning Analytics Experiment for Dutch institutes for higher education to gain hands-on experience with learning analytics. With this experiment, SURFnet wishes to demonstrate the possibilities of learning analytics in education. By carrying out this experiment, educational institutions can answer the following questions: Is learning analytics really so complicated? How does learning analytics fit into an educational infrastructure? How do you collect data? How do you visualise data? In this paper we present the set-up of the Learning Analytics experiment, the learning analytics architecture and infrastructure used, and the institutes that participate in the experiment.
Presentation 1B3 (20 min): “Developing a MOOC experimentation platform: Insights from a user study”
(Short Research Paper)
by Vitomir Kovanović, Srećko Joksimović, Philip Katerinopoulos, Charalampos Michail, George Siemens & Dragan Gašević
Abstract:
In 2011, the phenomenon of MOOCs swept the world of education and put online education at the center of public discourse. Although researchers were excited by the vast amounts of MOOC data being collected, the benefits of this data did not live up to expectations due to several challenges. The analyses of MOOC data are very time-consuming and labor-intensive, and require a highly advanced set of technical skills, often not available to education researchers. Because of this, MOOC data analyses are rarely completed before the courses end, limiting the potential of the data to impact student learning outcomes and experience.
In this paper we introduce MOOCito (MOOC intervention tool), a user-friendly software platform for the analysis of MOOC data that focuses on conducting data-informed instructional interventions and course experimentation. We cover important design principles behind MOOCito and provide an overview of the trends in MOOC research leading to its development. Although a work in progress, we outline the prototype of MOOCito and report the results of a user evaluation study that focused on the system’s perceived usability and ease of use. The results of the study are discussed, as well as their practical implications.

Session: Modelling Student Behaviour

Presentation 1C1 (30 min): “Detecting Changes in Student Behavior from Clickstream Data”
(Full Research Paper)
by Jihyun Park, Kameryn Denaro, Fernando Rodriguez, Padhraic Smyth & Mark Warschauer
Abstract:
Student clickstream data can provide valuable insights about student activities in an online learning environment and how these activities inform their learning outcomes. However, given the noisy and complex nature of this data, an ongoing challenge involves devising statistical techniques that capture clear and meaningful aspects of students’ click patterns. In this paper, we utilize statistical change detection techniques to investigate students’ online behaviors. Using clickstream data from two large university courses, one face-to-face and one online, we illustrate how this methodology can be used to detect when students change their previewing and reviewing behavior, and how these changes can be related to other aspects of students’ activity and performance.
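As a rough illustration of the kind of change detection this abstract refers to (not the authors’ actual statistical technique), a simple cumulative-sum (CUSUM) statistic can flag the point where a student’s daily click counts shift; the data and threshold below are hypothetical:

```python
import numpy as np

def cusum_change_point(counts, threshold=5.0):
    """Return the index of the most likely change point in a sequence of
    daily click counts using a simple CUSUM statistic, or None if no
    deviation exceeds the threshold."""
    x = np.asarray(counts, dtype=float)
    deviations = np.cumsum(x - x.mean())    # cumulative deviation from the overall mean
    k = int(np.argmax(np.abs(deviations)))  # largest departure marks the candidate change
    return k if abs(deviations[k]) > threshold else None

# Hypothetical example: a student who stops previewing material mid-term.
daily_clicks = [12, 9, 11, 10, 13, 2, 1, 0, 2, 1]
print(cusum_change_point(daily_clicks))  # -> 4, the last day before the drop
```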
Presentation 1C2 (30 min): “Modeling Exploration Strategies to Predict Student Performance within a Learning Environment and Beyond”
(Full Research Paper)
by Tanja Käser, Nicole R. Hallinen & Daniel L. Schwartz
Abstract:
Modeling and predicting student learning is an important task in computer-based education. A large body of work has focused on representing and predicting student knowledge accurately. Existing techniques are mostly based on students’ performance and on timing features. However, research in education, psychology and educational data mining has demonstrated that students’ choices and strategies substantially influence learning. In this paper, we investigate the impact of students’ exploration strategies on learning and propose the use of a probabilistic model jointly representing student knowledge and strategies. Our analyses are based on data collected from an interactive computer-based game. Our results show that exploration strategies are a significant predictor of the learning outcome. Furthermore, the joint models of performance and knowledge significantly improve the prediction accuracy within the game as well as on external post-test data, indicating that this combined representation provides a better proxy for learning.
Presentation 1C3 (20 min): “Opportunities for Personalization in Modeling Students as Bayesian Learners”
(Short Research paper)
by Charles Lang
Abstract:
The following paper is a proof-of-concept demonstration of a novel Bayesian framework for making inferences about individual students and the context in which they are learning. It has implications both for efforts to automate personalized instruction and for probabilistically modeling educational context. By modeling students as Bayesian learners, individuals who weigh their prior beliefs against current circumstantial data to reach conclusions, it becomes possible to generate probabilistic estimates of both performance and the impact of the educational environment. This framework is tested through a novel Bayesian algorithm that can be used to characterize student prior knowledge of course material and predict student performance. This is demonstrated using both simulated data and data from an online intelligent tutoring system. The algorithm generates estimates that behave qualitatively as expected on simulated data and predicts student performance as well as Bayesian Knowledge Tracing on real-world data. A discussion of the results and the conceptual benefits of the framework follows.
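To make the “student as Bayesian learner” framing concrete, here is a minimal Beta-Bernoulli sketch (our simplification, not the paper’s algorithm) in which a prior belief about a student’s success probability is updated by observed responses:

```python
class BayesianLearner:
    """Beta-Bernoulli model: a prior belief about success probability is
    weighed against observed responses, as in the 'Bayesian learner' framing."""
    def __init__(self, prior_correct=1.0, prior_incorrect=1.0):
        self.a = prior_correct    # pseudo-counts of correct answers
        self.b = prior_incorrect  # pseudo-counts of incorrect answers

    def update(self, correct):
        if correct:
            self.a += 1
        else:
            self.b += 1

    def predict(self):
        """Posterior mean probability that the next response is correct."""
        return self.a / (self.a + self.b)

learner = BayesianLearner(prior_correct=2, prior_incorrect=2)  # weak, neutral prior
for response in [True, True, False, True]:
    learner.update(response)
print(round(learner.predict(), 3))  # (2 + 3) / (4 + 4) = 0.625
```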

Session: Students at-Risk – Studies

Presentation 1D1 (30 min): “Ouroboros: Early identification of at-risk students without models based on legacy data”
(Full Research Paper)
by Martin Hlosta, Zdenek Zdrahal & Jaroslav Zendulka
Abstract:
This paper focuses on the problem of identifying students who are at risk of failing their course. The presented method proposes a solution in the absence of data from previous courses, which are usually used for training machine learning models. This situation typically occurs in new courses. We present the concept of a “self-learner” that builds the machine learning models from the data generated during the current course. The approach utilises information about already submitted assessments, which introduces the problem of imbalanced data for training and testing the classification models.
There are three main contributions of this paper: (1) the concept of training the models for identifying at-risk students using data from the current course, (2) specifying the problem as a classification task, and (3) tackling the challenge of imbalanced data, which appears both in training and testing data.
The results show a comparison with the traditional approach of learning the models from legacy course data, validating the proposed concept.
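A bare-bones sketch of the “self-learner” idea: train only on data generated during the current course and counteract class imbalance via class weighting. The features and the use of scikit-learn are illustrative assumptions, not the paper’s implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features from the current course only: clicks before the
# assessment and a flag for whether the assessment was submitted.
X = np.array([[120, 1], [80, 1], [5, 0], [95, 1], [2, 0], [110, 1], [60, 1], [8, 0]])
y = np.array([0, 0, 1, 0, 1, 0, 0, 1])  # 1 = at risk (minority class)

# class_weight='balanced' reweights the loss to counter the imbalance that
# arises because few students have missed submissions so far in the course.
model = LogisticRegression(class_weight="balanced").fit(X, y)
print(model.predict_proba([[10, 0]])[0, 1])  # estimated risk for a new student
```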
Presentation 1D2 (30 min): “What can learning analytics data from an LMS tell us that assessment results can’t?”
(Practitioner Presentation)
by Karl Cottenie, Shoshanah Jacobs & Claire Coulter
Abstract:
Learning analytics software packages, some of which make use of student activities recorded through the learning management system, are being developed for use by academic institutions to facilitate identification of at-risk students for timely intervention. It has been recommended that the reliability and feasibility of such systems be tested prior to implementation. We initiated a project to explore the functionality of a student analytics platform in a study with several large first-year undergraduate classes across the disciplines. The primary objectives of the project were to (1) evaluate the ability of the tool to predict final student performance using activities from different stages of the semester, across a variety of domains, and in comparison to measures based on assessment results alone; and (2) assess the appropriateness of using this system as an early warning system.
Presentation 1D3 (20 min): “Impact of Student Choice of Content Adoption Delay on Course Outcomes”
(Short Research Paper)
by Lalitha Agnihotri, Alfred Essa & Ryan Baker
Abstract:
It is difficult for a student to succeed in a course without access to course materials and assignments; and yet, some students delay up to a month in obtaining access to these essential materials. Students delay buying material required for their course for multiple reasons. Out of concern for students with limited financial resources, some publishers offer a period of free courtesy access. But this may leave students with a lapse in access after the courtesy period ends, until they pay for the materials. Not having key course materials early on probably hurts learning, but how much? In this paper, we investigate the question, “Does lack of access to instructional material impact student performance in blended learning courses?” Specifically, we analyze students who purchased and obtained access to online content at different points in the course. We determine that both types of failure to obtain access to course materials (delaying signing up for the product, or signing up for a free trial and letting the trial period lapse without purchasing the materials) are associated with substantially worse student outcomes. Students who purchased the product within the first few days of class had the best scores (median 77). Those who waited two weeks before accessing the product did the worst (median 56, effect size Cliff’s Delta = 0.31). We conclude with a discussion of possible interventions and actions that can be taken to ameliorate the situation.
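Cliff’s Delta, the effect size reported above, is the probability that a score from one group exceeds a score from the other, minus the reverse probability; a minimal sketch with hypothetical score samples:

```python
def cliffs_delta(xs, ys):
    """Cliff's Delta: P(x > y) - P(x < y) over all pairs from the two groups."""
    greater = sum(1 for x in xs for y in ys if x > y)
    less = sum(1 for x in xs for y in ys if x < y)
    return (greater - less) / (len(xs) * len(ys))

# Hypothetical score samples: early purchasers vs. two-week delayers.
early = [77, 82, 70, 88, 75]
late = [56, 61, 50, 72, 58]
print(cliffs_delta(early, late))  # a positive delta favors the early group
```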

Early Afternoon Sessions (1:00 PM – 2:30 PM)


Session: Improving Learning

Presentation 2A1 (30 min): “How to Assign Students into Sections to Raise Learning”
(Full Research Paper)
by Ming Chiu, Bonnie Chow & Sung Wook Joh
Abstract:
Grouping students with similar past achievement together (tracking) might affect their reading achievement. Multilevel analyses of 208,057 fourth grade students in 40 countries showed that clustering students in schools by past achievement was linked to higher reading achievement, consistent with the benefits of customized, targeted instruction. Meanwhile, students had higher reading achievement with greater differences (variances) among classmates’ past achievement, reading attitudes, or family SES; these results are consistent with the view that greater student differences yield more help opportunities (higher achievers help lower achievers, so that both learn), and foster learning from their different resources, attitudes and behaviors. Also, a student had higher reading achievement when classmates had more resources (SES, home educational resources, reading attitude, past achievement), suggesting that classmates shared their resources and helped one another. Modeling of non-linear relations and achievement subsamples of students supported the above interpretations. Principals can use these results and a simpler version of this methodology to reallocate students and resources into different course sections at little cost to improve students’ reading achievement.
Presentation 2A2 (30 min): “Improving Learning through Achievement Priming in Crowdsourced Information Finding Microtasks”
(Full Research Paper)
by Ujwal Gadiraju & Stefan Dietze
Abstract:
Crowdsourcing has become an increasingly popular means to acquire human input on demand. Microtask crowdsourcing marketplaces facilitate access to millions of people (called workers) who are willing to participate in tasks in return for monetary rewards or other forms of compensation. This paradigm presents a unique learning context where workers have to learn to complete tasks on-the-fly, applying their learning immediately through the course of tasks. However, most workers typically drop out early in large batches of tasks, depriving themselves of the opportunity to learn on-the-fly through the course of batch completion. By doing so, workers squander a potential chance at improving their performance and completing tasks effectively. In this paper, we propose a novel method to engage and retain workers in crowdsourced information finding tasks by using achievement priming. Through rigorous experimental findings, we show that it is possible to retain workers in long batches of tasks by triggering their inherent motivation to achieve and excel. As a consequence of increased worker retention, we find that workers learn to perform more effectively.
Presentation 2A3 (20 min): “Exploring the Asymmetry of Metacognition”
(Short Research Paper)
by Ani Aghababyan, Nicholas Lewkow & Ryan Baker
Abstract:
People in general, and students in particular, have a tendency to misinterpret their own abilities. Some tend to underestimate their skills, while others tend to overestimate them. This paper investigates the degree to which metacognition is asymmetric in real-world learning and examines the change in students’ confidence over the course of a semester and its impact on their academic performance. Our analyses, conducted using data from 129,644 students learning in eight courses within the LearnSmart platform, indicate that poor or unrealistic metacognition is asymmetric. These students are biased in one direction: they are more likely to be overconfident than underconfident. Additionally, while the examination of the temporal aspects of confidence reveals no significant change across the semester as a whole, changes are more apparent in the first and last few weeks of the course. More specifically, there is a sharp increase in underconfidence and a simultaneous decrease in realistic evaluation toward the end of the semester. Finally, both overconfidence and underconfidence seem to be correlated with students’ overall course performance. An increase in overconfidence is related to higher overall performance, while an increase in underconfidence is associated with lower overall performance.
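One straightforward way to operationalize over- and underconfidence of the kind studied here (an illustrative simplification, not the paper’s measure) is the mean signed gap between self-rated confidence and actual correctness:

```python
import numpy as np

def confidence_bias(confidence, correct):
    """Mean signed gap between self-rated confidence (0-1) and correctness (0/1).
    Positive values indicate overconfidence; negative values, underconfidence."""
    return float(np.mean(np.asarray(confidence) - np.asarray(correct)))

# Hypothetical item-level data for one student.
confidence = [0.9, 0.8, 0.95, 0.7]  # self-reported certainty per item
correct = [1, 0, 0, 1]              # actual outcomes
print(confidence_bias(confidence, correct))  # > 0: the student is overconfident
```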

Session: Understanding Discourse I

Presentation 2B1 (30 min): “Temporal Analytics with Discourse Analysis: Tracing Ideas and Impact on Communal Discourse”
(Full Research Paper)
by Alwyn Vwen Yen Lee & Seng Chee Tan
Abstract:
This study integrates social network analysis and temporal analytics to reveal key ideas in participants’ discourse, and to trace the origins of these promising ideas in a context of a knowledge building classroom where improving one another’s idea is the collective goal.
Presentation 2B2 (30 min): “Dynamics of MOOC Discussion Forums”
(Full Research Paper)
by Mina Shirvani Boroujeni, Tobias Hecking, H. Ulrich Hoppe & Pierre Dillenbourg
Abstract:
In this integrated study of dynamics in MOOC discussion forums, we analyze the interplay of temporal patterns, discussion content, and the social structure emerging from the communication, using mixed methods. A special focus is on the as yet under-explored aspect of time dynamics and the influence of the course structure on forum participation. Our analyses show dependencies between the course structure (video opening times and assignment deadlines) and the overall forum activity, whereas such a clear link could only be partially observed in the discussion content. For analyzing the social dimension, we apply role modeling techniques from social network analysis. While the types of user roles based on connection patterns are relatively stable over time, the high fluctuation of active contributors leads to frequent changes from active to passive roles during the course. However, while most users do not create many social connections, they can play an important role in the content dimension, triggering discussions on the course subject. Finally, we show that the forum activity level can be predicted one week in advance based on the course structure, forum activity history, and attributes of the communication network, which enables identification of periods when increased tutor support in the forum is necessary.
Presentation 2B3 (20 min): “Assessment of Language in Authentic Science Inquiry Reveals Putative Differences in Epistemology”
(Short Research Paper)
by Melanie Peffer & Kristopher Kyle
Abstract:
Science epistemology, or beliefs about what it means to do science and how science knowledge is generated, is an integral part of authentic science inquiry. Although the development of a sophisticated science epistemology is critical for attaining science literacy, epistemology remains an elusive construct to evaluate precisely and quantitatively. Previous work has suggested that analysis of student practices in science inquiry, such as their use of language, may be reflective of their underlying epistemologies. Here we describe the use of a learning analytics tool, TAALES, and keyness analysis to analyze the concluding statements made by students at the end of a computer-based authentic science inquiry experience. Preliminary results indicate that linguistic analysis reveals differences in domain-general lexical sophistication and in domain-specific verb usage that are consistent with the expertise level of the participant. For example, experts tend to use more hedging language such as “may” and “support” during conclusions, whereas novices use stronger language such as “cause.” Using these differences, a simple, rule-based prediction algorithm with LOOCV achieved prediction accuracies of greater than 80%. These data underscore the potential for the use of learning analytics in simulated authentic inquiry to provide a novel and valuable method of assessing inquiry practices and related epistemologies.

Session: LA Ethics

Presentation 2C1 (30 min): “An elephant in the learning analytics room – the obligation to act”
(Full Research Paper)
by Paul Prinsloo & Sharon Slade
Abstract:
There is a sufficient body of research and existing practice to convince key stakeholders within higher education of the potential of learning analytics to impact on student experiences in online and distributed learning environments. Much of the recent focus is around predictive modeling and uses of artificial intelligence both to identify learners at risk and to personalize interventions to increase the chance of success. As higher education increasingly moves to online and digital learning spaces, we have access not only to greater volumes of student data, but also to increasingly fine-grained and nuanced data. Underlying much of learning analytics is a set of core assumptions: that knowing more about our students will, per se, offer a greater understanding of their learning journey; that this enhanced knowledge and understanding provides the insight to do more; and that knowing and understanding more leads to action, that we actually will act and do more. Each assumption is worthy of further reflection and investigation, but here we investigate issues around the latter: that knowledge necessarily leads to action, and more specifically that a greater understanding of what helps students to succeed, or perhaps helps them not to fail, leads to an obligation to act. In considering the obligation to act, this paper does not intend to suggest that higher education does not act to improve student success and retention. However, the obligation to act is tempered by a number of factors, including gaps in institutional research to shape institutional responsiveness, weak integration in institutional sense-making and in responding to students at risk, and the constraints imposed on the obligation to act by changing funding regimes. Increasingly, higher education institutions allocate resources in areas that promise the greatest return. Choosing (not) to respond to the needs of specific student populations then raises questions regarding the scope and nature of the moral and legal obligation to act. In this paper we conceptually map the potential and obligation to act which flows from both higher education’s mandate to ensure effective and appropriate teaching and learning and its fiduciary duty to provide an enabling environment for students to achieve success. We examine how the collection and analysis of student data links to the availability of resources, the will to act, and the obligation to act. Further, we examine how that obligation unfolds in two open distance education providers from the perspective of a key set of stakeholders, those in immediate contact with students and their learning journeys: the tutors or adjunct faculty.
Presentation 2C2 (30 min): “Where is the evidence? Learning analytics: a call to action”
(Full Research Paper)
by Rebecca Ferguson & Doug Clow
Abstract:
Where is the evidence for learning analytics? In particular, where is the evidence that it improves learning in practice? Can we rely on it? Currently, there are vigorous debates about the quality of research evidence in medicine and psychology, with particular issues around statistical good practice, the ‘file drawer effect’, and ways in which incentives for stakeholders in the research process reward the quantity of research produced rather than the quality. In this paper, we present the Learning Analytics Community Exchange (LACE) project’s Evidence Hub, an effort to relate research evidence in learning analytics to four propositions about learning analytics: whether they support learning, support teaching, are deployed widely, and are used ethically. Surprisingly little evidence in this strong, specific sense was found, and very little was negative (7%, N=124), suggesting that learning analytics is not immune from the pressures in other areas. We explore the evidence in one particular area in detail (whether they improve teaching in the university sector), and set out some of the weaknesses of the evidence available. We conclude that there is considerable scope for improving the evidence base for learning analytics, and set out some suggestions for various stakeholders to achieve this.
Presentation 2C3 (20 min): “Student Perceptions of Their Privacy in Learning Analytics Applications”
(Short Research Paper)
by Kimberly Arnold & Niall Sclater
Abstract:
Over the past five years, ethics and privacy around student data have become major topics of conversation in the learning analytics field. However, the majority of these conversations have been theoretical in nature. The authors of this paper posit that more direct student engagement needs to be undertaken, and initial data from institutions beginning this process is shared.

Session: Understanding Student Behaviour – Multimodal Analytics

Presentation 2D1 (30 min): “Understanding Student Learning Trajectories Using Multimodal Learning Analytics within an Embodied-Interaction Learning Environment”
(Full Research Paper)
by Alejandro Andrade
Abstract:
The aim of this paper is to show how multimodal learning analytics (MMLA) can help understand how elementary students explore the concept of feedback loops while controlling an embodied simulation of a predator-prey ecosystem using hand movements as an interface with the computer simulation. We represent student motion patterns from fine-grained logs of hands and gaze data, and then map these observed motion patterns against levels of student performance to make inferences about how embodiment plays a role in the learning process. Results show five distinct motion sequences in students’ embodied interactions, and these motion patterns are statistically associated with initial and post-tutorial levels of students’ understanding of feedback loops. Analysis of student gaze also shows distinctive patterns as to how low- and high-performing students attended to information presented in the simulation. Using MMLA, we show how students’ explanations of feedback loops differ according to cluster membership, which provides evidence that embodiment interacts with conceptual understanding.
Presentation 2D2 (30 min): “Put Your Thinking Cap On: Detecting Cognitive Load using EEG during Learning”
(Full Research Paper)
by Caitlin Mill, Igor Fridman, Walid Soussou, Disha Waghray, Andrew Olney & Sidney D’Mello
Abstract:
Current intelligent tutoring systems (ITSs) have no way to determine students’ mental effort: are they in deep thought, struggling to solve a problem, or are they zoned out completely because the material is way too easy for them? One way to answer this and similar questions may be through the use of EEG-based cognitive load detectors during learning. Despite its potential, EEG has not yet been utilized as a way to optimize instructional strategies during learning. We take an initial step towards this goal by assessing how experimentally manipulated (easy and difficult) sections of an intelligent tutoring system impacted an EEG-based estimation of students’ cognitive load. A main effect of task difficulty was found during two phases of an ITS, suggesting that instructional design can influence cognitive load. We found that performance was correlated with levels of predicted cognitive load. Finally, we provide evidence that EEG is a viable source of data, even across a 90-minute learning session.
Presentation 2D3 (20 min): “Analytics Meet Patient Manikins: Challenges in an Authentic Small-Group Healthcare Simulation Classroom”
(Short Research Paper)
by Roberto Martinez-Maldonado, Tamara Power, Carolyn Hayes, Adrian Abdipranoto, Tony Vo, Carmen Axisa & Simon Buckingham Shum
Abstract:
Healthcare simulations are hands-on learning experiences aimed at allowing students to practice essential skills that they may need when working with real patients in the clinical environment. Some clinical classrooms are equipped with simulated patient manikins that can respond to actions or that can be programmed to deteriorate over time. Students can perform assessments and interventions, and enhance their critical thinking and communication skills. There is an opportunity to exploit the students’ digital footprints that these simulators can pervasively capture to make key aspects of the learning process visible. The setting can be augmented with sensors to capture traces of group interaction. This multimodal data can be used to generate visualisations or feedback for students or teachers. This paper reports on an authentic classroom study using analytics to collect and integrate multimodal data of students’ interactions with the manikins and their peers in simulation scenarios. We report on the challenges encountered in deploying such analytics ‘in the wild’, using an analysis framework that considers the social, epistemic and physical dimensions of collocated collaborative activity.

Afternoon Sessions (3:00 PM – 4:20 PM)


Session: Self-Regulated Learning

Presentation 3A1 (30 min): “Learning Pulse: a machine learning approach for predicting performance in self-regulated learning using multimodal data”
(Full Research Paper)
by Daniele Di Mitri, Maren Scheffel, Hendrik Drachsler, Dirk Börner, Stefaan Ternier & Marcus Specht
Abstract:
Learning Pulse explores whether a machine learning approach applied to multimodal data such as heart rate, step count, weather conditions and learning activity can predict learning performance in self-regulated learning settings. An eight-week experiment was carried out with PhD students as participants, each of them wearing a Fitbit HR wristband and having the applications used on their computers recorded during their learning and working activities throughout the day. A software infrastructure for collecting multimodal learning experiences was implemented. As part of this infrastructure, a Data Processing Application was developed to pre-process and analyse the data and generate predictions that provide feedback to the users about their learning performance. Data from different sources were stored using the xAPI standard in a cloud-based Learning Record Store. The participants of the experiment were asked to rate their learning experience through an Activity Rating Tool, indicating their perceived level of productivity, stress, challenge and abilities. These self-reported performance indicators were used as markers to train a Linear Mixed Effect Model to generate learner-specific predictions of learning performance.
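A minimal sketch of a linear mixed-effects model in the spirit described above, with a random intercept per participant; the column names, sample values and the use of statsmodels are our assumptions, not the Learning Pulse implementation:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-interval records: a self-reported productivity rating
# alongside sensor-derived features, grouped by participant.
data = pd.DataFrame({
    "participant": ["p1", "p1", "p1", "p2", "p2", "p2", "p3", "p3", "p3"],
    "productivity": [3.0, 4.0, 3.5, 2.0, 3.5, 2.5, 4.5, 4.0, 4.5],
    "heart_rate": [72, 80, 76, 90, 85, 88, 70, 75, 72],
    "steps": [500, 1200, 900, 300, 800, 500, 1500, 1100, 1400],
})

# A random intercept per participant captures learner-specific baselines.
model = smf.mixedlm("productivity ~ heart_rate + steps", data,
                    groups=data["participant"])
result = model.fit()
print(result.params)  # fixed-effect estimates for the sensor features
```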
Presentation 3A2 (20 min): “Transitioning self-regulated learning profiles in hypermedia-learning environments”
(Short Research Paper)
by Clarissa Lau, Jeanne Sinclair, Michelle Taub, Roger Azevedo & Eunice Eunhee Jang
Abstract:
Self-regulated learning (SRL) is a process that highly fluctuates as students actively deploy their metacognitive and cognitive processes during learning. In this paper, we apply an extension of latent profiling, latent transition analysis (LTA), which investigates the longitudinal development of students’ SRL latent class memberships over time. We briefly review the theoretical foundations of SRL and discuss the value of using LTA to investigate this multidimensional concept. This study is based on college students (n = 75) learning about the human circulatory system while using MetaTutor, an intelligent tutoring system that adaptively supports SRL and targets specific metacognitive SRL processes including judgment of learning (JOL) and content evaluation (CE). Preliminary results identify transitional probabilities of SRL profiles from four distinct events associated with the use of SRL.
Presentation 3A3 (20 min): “Expanding the Scope of Learning Analytics Data: Preliminary Findings on Attention and Self-Regulation using Wearable Technology”
(Short Research Paper)
by Catherine Spann, James Schaeffer & George Siemens
Abstract:
The ability to pay attention and self-regulate is a fundamental skill required of learners of all ages. Learning analytics researchers have to date relied on data generated by a computing system (such as learning management system, clickstream, or log data). The development of wearable computing through fitness trackers, watches, heart monitors, and clinical-grade devices such as Empatica’s E4 now provides researchers with access to psychophysiological data as students interact with learning content or software systems. This level of data collection promises to provide valuable insight into the affective and emotional experiences of individuals. Our study details the use of wearable technologies to assess the relationship between heart rate variability and the self-regulatory abilities of an individual. This is relevant for the field of learning analytics as methods become more complex and the assessment of learner performance becomes more nuanced and attentive to the affective factors that contribute to learner success.
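Heart rate variability of the kind measured by such wearables is commonly summarized by RMSSD over inter-beat (RR) intervals; a small sketch with hypothetical values (the study’s own metrics may differ):

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals,
    a standard time-domain heart rate variability measure."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

# Hypothetical inter-beat intervals in milliseconds from a wearable sensor.
rr = [812, 790, 845, 830, 801, 795, 820]
print(round(rmssd(rr), 1))  # higher RMSSD generally reflects greater variability
```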

Session: Reflective Writing

Presentation 3B1 (30 min): “Reflective Writing Analytics for Actionable Feedback”
(Full Research Paper)
by Andrew Gibson, Adam Aitken, Ágnes Sándor, Simon Buckingham Shum, Cherie Tsingos-Lucas & Simon Knight
Abstract:
Reflective writing can provide a powerful way for students to integrate professional experience and academic learning. However, writing reflectively requires high quality actionable feedback, which is time-consuming to provide at scale. This paper reports progress on the design, implementation, and validation of a Reflective Writing Analytics platform to provide actionable feedback within a tertiary authentic assessment context. The contributions are: (1) a new conceptual framework for reflective writing; (2) a computational approach to modelling reflective writing, deriving analytics, and providing feedback; (3) the pedagogical and user experience rationale for platform design decisions; and (4) a pilot in a student learning context, with preliminary data on educator and student acceptance, and the extent to which we can evidence that the software provided actionable feedback for reflective writing.
Presentation 3B2 (20 min): “Reflective Writing Analytics – Empirically Determined Keywords of Written Reflection”
(Short Research Paper)
by Thomas Daniel Ullmann
Abstract:
Despite their importance for educational practice, reflective writings are still manually analysed and assessed, posing a constraint on the use of this educational technique. Recently, research has started to investigate automated approaches for analysing reflective writing. Foundational to many automated approaches is knowledge of the words that are important for the genre. This research presents keywords that are specific to several categories of a reflective writing model. These keywords have been derived, using the log-likelihood method, from eight datasets containing several thousand instances. Two performance measures, accuracy and Cohen’s κ, were estimated for these keywords with tenfold cross-validation. The results reached an accuracy of 0.78 on average for all eight categories and a fair to good interrater reliability for most categories, even though the approach did not make use of any sophisticated rule-based mechanisms or machine learning approaches. This research contributes to the development of automated reflective writing analytics that are based on data-driven empirical foundations.
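The log-likelihood keyness statistic referred to above compares a word’s frequency in a target corpus against a reference corpus; a minimal sketch of the standard Dunning G² formula, with hypothetical counts:

```python
import math

def log_likelihood(freq_target, size_target, freq_ref, size_ref):
    """Dunning's G2 log-likelihood keyness for one word: observed frequencies
    in two corpora compared against the frequencies expected if the word
    were equally likely in both."""
    total = size_target + size_ref
    expected_t = size_target * (freq_target + freq_ref) / total
    expected_r = size_ref * (freq_target + freq_ref) / total
    g2 = 0.0
    for observed, expected in ((freq_target, expected_t), (freq_ref, expected_r)):
        if observed > 0:
            g2 += observed * math.log(observed / expected)
    return 2 * g2

# Hypothetical counts: a word appears 40 times in 10,000 reflective tokens
# versus 10 times in 20,000 non-reflective tokens.
print(round(log_likelihood(40, 10_000, 10, 20_000), 2))  # G2 > 3.84: p < .05
```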

Session: Understanding Student Behaviour – Engagement

Presentation 3C1 (30 min): “LMS Influence on Student Engagement”
(Practitioner Presentation)
by Alex Perrier & Steve Harding
Abstract:
In the fall term of 2016, Berklee Online transitioned its Learning Management System (LMS) from Moodle to a new, Canvas-based system with a proprietary user interface (UI). This transition gave our organization an opportunity to study the influence of LMS design on the student learning experience. One notable characteristic of the new LMS is its emphasis on social interaction between students. Student engagement is a strong proxy for course quality, student satisfaction, and learning outcomes. In this article, we focus on three main dimensions of student engagement through interactions: i) with the content, ii) with the instructor, and iii) with other students. Finally, we analyze the impact of the new features and enhanced UI on student engagement.
Presentation 3C2 (20 min): “Predicting the decrease of engagement indicators in a MOOC”
(Short Research Paper)
by Miguel L. Bote-Lorenzo & Eduardo Gómez-Sánchez
Abstract:
Predicting the decrease of students’ engagement in MOOCs is key to trigger timely interventions aimed at avoiding the disengagement before it takes place. This paper proposes and evaluates an approach to build the necessary predictive models using data that becomes available during a course.
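One plausible shape for such a predictive model (our assumption; the paper’s pipeline may differ) is a classifier trained on features from the weeks observed so far, predicting whether a student’s engagement indicator drops the following week:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical weekly features available during the course:
# [videos watched this week, exercises attempted, week-over-week change].
X = np.array([[5, 8, 0], [4, 7, -1], [1, 2, -5], [6, 9, 1], [2, 1, -4], [5, 6, 0]])
# Label: did the engagement indicator decrease in the following week?
y = np.array([0, 0, 1, 0, 1, 0])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict_proba([[2, 3, -3]])[0, 1])  # risk of disengagement next week
```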
Presentation 3C3 (20 min): “Studying Engagement and Performance with Learning Technology in an African Classroom”
(Short Research Paper)
by Komminist Weldemariam, Juliet Mutahi, Andrew Kinai, Abdigani Diriye & Nelson Bore
Abstract:
In this paper, we study the engagement and performance of students in a classroom using a system called Cognitive Learning Companion (CLC). CLC is designed to keep track of, and securely store, the relationship between student content interactions and learning progression. It also provides evidence-based, engagement-oriented actionable insights to teachers by assessing information from a sensor-rich instrumented learning environment in order to infer a learner’s cognitive and affective states. Data captured from the instrumented environment is aggregated and analyzed to create interlinked insights, helping teachers identify how students engage with learning content and view performance records on selected assignments. We conducted a one-month pilot with 27 learners in a primary school in Nairobi, Kenya during their math and science instructional periods. In particular, we present our primary analysis of content-level interactions and engagement at the individual student and classroom level.

Session: Learning Design

Presentation 3D1 (30 min): “Unravelling the dynamics of instructional practice: A longitudinal study on learning design and VLE activities”
(Full Research Paper)
by Quan Nguyen, Bart Rienties & Lisette Toetenel
Abstract:
Learning analytics has the power to provide just-in-time support, especially when predictive analytics is married with the way teachers have designed their course, or so-called learning design. Although recently substantial progress has been made in aligning learning analytics with learning design, there is still a shortage of empirical evidence of how teachers actually design their courses, and how this influences how students learn. This study investigates how learning design is configured over time and its impact on student activities by analyzing longitudinal data of 38 modules with a total of 43,099 registered students over 30 weeks at the Open University UK, using social network analysis and panel data analysis. Our analysis unpacked dynamic configurations of learning design between modules over time, which allows teachers to reflect on their practice in order to anticipate problems and make informed interventions. Furthermore, by controlling for the heterogeneity between modules, our results indicated that learning designs are able to explain up to 60% of the variability in student online activities, which reinforced the importance of pedagogical context in learning analytics.
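A bare-bones illustration of the panel idea (our specification, not the authors’ exact model): regress weekly student activity on a learning-design feature while absorbing between-module heterogeneity with fixed effects:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical module-week panel: hours of designed communication activity
# per week and the resulting average student online activity.
panel = pd.DataFrame({
    "module": ["A", "A", "A", "B", "B", "B"],
    "week": [1, 2, 3, 1, 2, 3],
    "comm_hours": [2.0, 3.0, 1.0, 4.0, 5.0, 3.0],
    "student_clicks": [110, 150, 90, 200, 240, 170],
})

# C(module) adds module fixed effects, controlling for heterogeneity between modules.
result = smf.ols("student_clicks ~ comm_hours + C(module)", data=panel).fit()
print(result.rsquared)  # share of variability in activity explained
```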
Presentation 3D2 (20 min): “A randomized controlled trial comparing three different ways of sequencing content: The role of choice”
(Short Research Paper)
by Seth A. Adjei, Anthony F. Botelho & Neil T. Heffernan
Abstract:
The effect of choice on student achievement and engagement has been an extensively researched area of learning analytics. Current research findings suggest a positive relationship between choice and many varied outcome measures, but very little has been reported to indicate whether these findings hold in intelligent tutoring systems. In this paper, we report the results of a randomized controlled experiment in which we investigate the effect of student choice on assignment completion and future achievement. The experimental design has three conditions to observe the effect of choice. In the first condition, students are given the choice to decide on the order in which to complete assignments; in the second condition, students are prescribed an intuitive order in which to complete assignments; and in the third condition, students are prescribed a counterintuitive order in which to complete assignments. The results of the present study indicate that giving students a choice in the order in which to work on assignments results in higher completion rates and better achievement on post-tests. A post-hoc analysis also shows that even among students with similar completion rates, the group given choice has a higher post-test score than any of the other conditions. The results seem to support the many theories of the positive effect of choice on student achievement.
Presentation 3D3 (20 min): “ATCE – An Analytics Tool to Trace the Creation and Evaluation of Inclusive and Accessible Open Educational Resources”
(Short Research Paper)
by Cecilia Avila, Silvia Baldiris, Ramon Fabregat & Sabine Graf
Abstract:
The creation of Inclusive and Accessible Open Educational Resources (IA-OERs) is a challenge for teachers because they have to invest time and effort to create accessible, inclusive, meaningful and high-quality learning contents considering students’ needs and preferences. An IA-OER is characterized by its pedagogical intentions considering the Universal Design for Learning (UDL) principles, the quality of its contents, and web accessibility as a way to guarantee access for all students. Creating an IA-OER with these characteristics is not a straightforward task, especially when teachers do not have enough information/feedback to make decisions on how to improve the learning contents. In this paper we introduce ATCE – an Analytics Tool to trace the Creation and Evaluation of IA-OERs. This tool focuses in particular on the accessibility and quality of the IA-OERs. ATCE was developed as a module within the ATutor learning management system. An analytics dashboard with visualizations related to the teachers’ competences in the creation and evaluation of IA-OERs was included as part of the tool. This paper also presents a use case of the visualizations obtained from the creation and evaluation of one IA-OER after using our analytics tool.
