
Guiding and Motivating Students Through Open Social Student Modeling: Lessons Learned


by I-Han Hsiao & Peter Brusilovsky - 2017

Background/Context: A large number of educational resources are now made available on the web to support both regular classroom learning and online learning. The abundance of available content has produced at least two problems: how to help students find the most appropriate resources and how to engage them in using and benefiting from these resources. Personalized and social learning have been suggested as potential ways to address these problems. Our work attempts to integrate these directions of research by combining the ideas of adaptive navigation support and open student modeling with the ideas of social comparison and social visualization. We call our approach Open Social Student Modeling (OSSM).

Objective/Research Questions: First, we review a sequence of our earlier projects focused on Open Social Student Modeling for one kind of learning content and formulate several key design principles that contribute to the success of OSSM. Second, we present our exploration of OSSM in a more challenging context of modeling student progress for two kinds of learning content in parallel. We aim to answer the following research questions: How do we design OSSM interfaces to support many kinds of learning content in parallel? Will the identified design principles (key features) confirm the power of the learning community through OSSM with multiple learning-resource collections? Will the OSSM visualization provide successful personalized guidance within a richer collection of educational resources?

Research Design: We designed four classroom studies to assess the value of different options for OSSM visualization of one and multiple kinds of learning content in the context of programming-language learning. We examined the comparative success of different design options to distill successful design patterns and other important lessons for the future developers of OSSM for personalized and social e-learning.

Findings/Results: The results confirmed the motivational impact of personalized social guidance provided by the OSSM system in the target context. The interface encouraged students to explore more topics and motivated them to work ahead of the course schedule. Both strong and weak students worked with the appropriate levels of questions for their readiness, which yielded consistent performance across different levels of complex problems. Additionally, providing more realistic content collection on the navigation-supported OSSM visualizations resulted in uniform performance for the group.

Conclusions/Recommendation: A sequence of studies of several OSSM interfaces confirmed that a combination of adaptive navigational support, open student modeling, and social visualization in the form of the OSSM interface can reinforce the navigational and motivational values of these approaches. In several contexts, the OSSM interface demonstrated its ability to offer effective guidance in helping students to locate the most relevant content at the right time while increasing student motivation to work with diverse learning content.



INTRODUCTION


A large number of educational resources are now available on the web to support both regular classroom learning and online learning. This abundance of available content produces at least two problems: how to help students find the most appropriate resources, and how to engage them in using and benefiting from these resources. Personalized learning and social learning technologies, among others, have been explored to address these problems. Personalized learning focuses mostly on guiding learners to good learning resources to help every learner find the most relevant and useful content, given a learner's current state of knowledge and interests (Kay, 2008). Social learning is known for its ability to increase the motivation of students to learn, among other positive impacts on the educational process (Barolli, Koyama, Nauerz, & Welsch, 2006; Méndez, Lorenzo, Acosta, Torres, & González, 2006; Vassileva & Sun, 2008). While each of these technologies has been explored in many research projects, very few attempts have been made to use these technologies in combination. We believe, however, that the integration of personalized and social learning technologies is a very promising research direction. These technologies have complementary strengths and could potentially reinforce each other when applied together. This paper reports the results of our exploration of one specific technology at the crossroads of personalized and social learning: Open Social Student Modeling.

Open Social Student Modeling (OSSM) integrates adaptive navigation support (Brusilovsky, 2007) and open student modeling (Bull & Kay, 2007), two prominent technologies in the field of personalized learning, with social visualization, a popular approach in the field of social learning (Vassileva, 2008). OSSM can be considered a social extension of open student modeling. Open student modeling has been suggested as a way to externalize student models, the key component of any personalized learning system. While in a traditional personalized learning system this model is usually hidden from the student and only used by the personalization engine to provide adaptation effects (Figure 1, left), systems with an open student model expose this model to the learner and provide an interface for its exploration and possible editing (Figure 1, right). Open student modeling is known for a number of positive effects. It increases the transparency of personalization, helps raise students' awareness of their learning performance, and supports metacognitive processes (Bull & Kay, 2013). In combination with adaptive navigation support, it can also efficiently guide students to the appropriate content (Brusilovsky, Sosnovsky, & Shcherbinina, 2004). In this context, the idea of Open Social Student Modeling is simply to make the content of individual student models accessible not only to the target student, but also to a broader group of students, for example, students in the same class. The most natural way to do this is through social visualization that can present the content of multiple student models to the target student in a form that enables comparison of the student's own knowledge to the knowledge of peers and the class as a whole.

We have explored the idea of OSSM in a sequence of studies. While the OSSM idea itself is relatively straightforward, it took us several attempts to do it right (i.e., implement it in a form that delivers several benefits) in a simple context with one type of learning content. We went through a sequence of incrementally more powerful designs that also allowed us to learn some important lessons about OSSM design. Armed with the lessons learned, we approached a more challenging context and implemented OSSM visualization for two kinds of learning content in parallel.

This paper presents an account of our work on OSSM over the last several years. We start with a literature review of open user modeling, social visualization, and underlying theories such as self-regulated learning and social comparison. Following that, we briefly summarize a sequence of our studies with OSSM in a one-content-type context. These studies have been published before; we review them here to illustrate the problems of OSSM design and to present lessons learned from these studies. Next, we present in greater detail our more recent study that evaluated the OSSM interface for two types of content. At the end of the paper we summarize the results, discuss the limitations, and consider future work.

Figure 1. Traditional approach (left) vs. integration of student models into the interface (right)


BACKGROUND

OPEN STUDENT MODELING

An open student (learner) model is a special kind of student model that allows the student to access and possibly modify the model content. In traditional personalized learning systems, student models are hidden under the hood and used only for the system's internal needs (i.e., to make the education process personalized) (Figure 1, left). The proponents of open student models (Figure 1, right) argue that the ability to view and modify their models could be beneficial for students for a number of reasons. A typical open learner model displays the modeled state of student knowledge, although models displaying interests (Ahn, Brusilovsky, Grady, He, & Syn, 2007) or learning styles (Triantafillou, Pomportsis, Demetriadis, & Georgiadou, 2004) are also known. Open knowledge models can be presented in simple forms such as a skill meter, a partly shaded bar that shows learner progress as a subset of expert knowledge (Bull & Kay, 2007; Weber & Brusilovsky, 2001); the probability that a learner knows a concept (Corbett, Anderson, & O'Brien, 1995); or a user's knowledge level compared to the combined knowledge of other groups of users (Linton, Joy, Schaefer, & Charron, 2000). Skill meters have been extended to show progress as a subset of material covered, which is, in turn, a subset of expert knowledge (Mitrovic & Martin, 2007); a further extension also allows misconceptions and the size of each topic to be included in the skill meter (Bull & Kay, 2007).

There are two main streams of work on open student models. One stream focuses on interfaces visualizing the model to support students' self-reflection and planning; the other encourages students to participate in the modeling process by engaging them through, e.g., negotiation or collaboration on construction of the model.

Visual representations of the student model vary from high-level summaries (such as skill meters) to complex concept maps or Bayesian networks. Corbett et al. (1995) described the ACT Programming Tutor interface, which provides the learner with a skill meter showing the list of learning goals and the progress the learner has already made with respect to the goals. Mabbott and Bull (2004) elaborated on an interface providing students with four views of their learner models; these views visualize different aspects of the underlying domain knowledge model, namely the hierarchical structure of topics, the lecture structure, semantic relationships among the topics, and the recommended sequence for learning the topics. The STyLE-OLM interface proposed by Dimitrova (2003) allows students to browse and navigate through their learner models using the visual notation of concept graphs.

Dimitrova et al. (2001) explored interactive open learner modeling by engaging learners to negotiate with the system during the modeling process. Chen, Chou, Deng, and Chan (2007) investigated active open learner models in order to motivate learners to improve their academic performance. Both individual and group open learner models were studied and demonstrated an increase in reflection and in helpful interactions among teammates. Bull and Kay (2007) described a framework for applying open user models in adaptive learning environments and provided many in-depth examples.

Studies have shown that students have a range of preferences for presentations of their own knowledge in open student modeling systems. Students highly value the options of having multiple views and being able to select the one that they are most comfortable with. Such results are promising, potentially increasing the quality of students' reflection on their own knowledge (Mabbott & Bull, 2004). A range of benefits have been reported on opening the student models to the learners, such as increasing learners' awareness of their developing knowledge, difficulties, and the learning process, as well as students' engagement, motivation, and knowledge reflection (Bull, 2004; Mitrovic & Martin, 2007; Zapata-Rivera & Greer, 2000). In our own work on the QuizGuide system (Hsiao, Sosnovsky, & Brusilovsky, 2010), we embedded open learner models into adaptive link annotation and demonstrated that this arrangement can remarkably increase student motivation to work with non-mandatory educational content.

SOCIAL VISUALIZATION AND SOCIAL NAVIGATION SUPPORT IN E-LEARNING

Within the broader area of social learning, social navigation support and social visualization are most directly related to the OSSM approach presented in this paper. Social navigation support capitalizes on a known social phenomenon: people tend to follow the footprints of other people (Brusilovsky, Chavan, & Farzan, 2004; Dieberger, 1997; Dieberger, Dourish, Höök, Resnick, & Wexelblat, 2000; Wexelblat, 1999). The educational value of this approach has been confirmed in several studies (Brusilovsky, Sosnovsky, & Yudelson, 2009; Farzan & Brusilovsky, 2008; Kurhila, Miettinen, Nokelainen, & Tirri, 2006). Social visualization aims to represent or organize multiple students' data in an informative way, for example by producing visual representations of student groups. Group visualizations have been used to support collaboration between learners in the same group and to foster competition in a group of learners (Vassileva & Sun, 2007). Vassileva and Sun (2007) investigated community visualization in online communities. Their results showed that social visualization allows peer recognition and provides students with the opportunity to build trust in others and in the group. CourseVis (Mazza & Dimitrova, 2007) was one of the pioneer systems providing graphical visualization of multiple groups of users to teachers and learners. It helped instructors to identify some common problems in distance learning.

In our own work, we try to move beyond visual representations of learning analytics by moving from action visualization to knowledge visualization. We combined cognitive aspects of open student modeling with social and visual aspects of social visualization and social navigation support by allowing students to explore and interact with one another's models as well as a cumulative model of the class. This idea was first explored by Bull and Britland (2007), who used OLMlets to research the problem of facilitating group collaboration and competition. Their results showed that optionally releasing the models to peers increases discussion among students and encourages them to start working sooner. The Open Social Student Modeling approach presented in this paper moves these ideas further. A series of OSSM designs presented in the paper demonstrates several benefits that can be obtained by merging open student modeling, social visualization, and social navigation support.

THEORETICAL BACKGROUND: SELF-REGULATED LEARNING AND SOCIAL COMPARISON THEORY

The theoretical background for our work on open student modeling and social visualization is grounded in research on self-regulated learning and social comparison theory.

Research in self-regulated learning examines students' metacognitive strategies for planning, monitoring, and modifying their management and control of their effort on classroom academic tasks (Pintrich & De Groot, 1990). Self-regulated learning involves self-monitoring to optimally interpret feedback from students' academic learning (Zimmerman, 1990). Azevedo, Guthrie, and Seibert (2004) investigated how self-regulated learning helped students acquire conceptual understanding. Their results showed that students who gained higher conceptual understanding (high jumpers) tended to be good at regulating their learning by using effective strategies, planning their learning by creating sub-goals and activating prior knowledge, monitoring their emerging understanding, and planning their time and effort. On the other hand, students who gained lower conceptual understanding (low jumpers) tended to handle task difficulties and demands mainly by engaging in help-seeking behavior and did not spend much time monitoring their learning. Our work aims to leverage awareness, motivation, and content organization through social visualizations in the hope of promoting students' self-regulated learning behavior.

Research in social comparison (Festinger, 1954) has demonstrated that people often determine appropriate behavior for themselves by examining the behavior of others, especially similar others (Buunk & Gibbons, 2007). Consequently, it has been shown that individuals tend to behave similarly to their friends and peers (Cialdini, Wosinska, Barrett, Butner, & Gornik-Durose, 1999). Researchers and designers of online systems have used insights from social comparison research in the study of online social behavior. In the educational domain, social comparison processes have been studied extensively (Darnon, Dompnier, Gilliéron, & Butera, 2010; Kaplan & Maehr, 2007) and their positive impact on student performance has been examined in several papers (Light, Littleton, Bale, Joiner, & Messer, 2000; Huguet, Dumas, Monteil, & Genestoux, 2001). In online education environments, social comparisons have been explored more recently (Vassileva, 2008), but no research to date has explored how social comparison-based adaptive systems can influence learning. Furthermore, while ample evidence points to the role of key personal attributes such as personality and culture in learning, little is known about how they affect learning in the context of adaptive learning systems or in environments in which social comparison is embedded. A synthesis review of many social comparison studies concluded that upward comparisons in the classroom often lead to better performance (Dijkstra, Kuyper, van der Werf, Buunk, & van der Zee, 2008). Over more than fifty years of social comparison theory literature, most of the research has been conducted through qualitative studies using interviews, questionnaires, and observation. In our research, we develop a set of quantitative measures for applying social comparison theory in the target context.

OPEN SOCIAL STUDENT MODELING: STUDIES WITH ONE KIND OF LEARNING CONTENT

Our work on Open Social Student Modeling was motivated by the success of our two earlier projects, QuizGuide (Brusilovsky, Sosnovsky, et al., 2004) and Knowledge Sea II (Brusilovsky, Chavan, et al., 2004). The QuizGuide system applied open student modeling and adaptive navigation support technology to provide personalized access to a collection of programming problems; the Knowledge Sea II system used social navigation support and map-based visualization to help students navigate to the most relevant weekly readings. QuizGuide demonstrated excellent ability to improve student performance and engage students to work with non-mandatory content. Yet its knowledge-based guidance was relatively hard to build; it required considerable knowledge engineering efforts to create a prerequisite-based network of topics. On the other hand, Knowledge Sea II used no knowledge engineering; its power to guide students to relevant reading was based on social wisdom: processing traces of student work. OSSM was born as an attempt to replace QuizGuide's original guidance approach, based on labor-intensive knowledge engineering, with the self-organized social navigation that worked so well in Knowledge Sea II. We expected that showing students the state of the target student's knowledge model, topic by topic, alongside the student models of peers would help them avoid topics that might be too easy or too hard to explore and focus on the most appropriate subset of programming problems. We also hoped that the engaging power of the open student model and the social comparisons provided by OSSM would work together in motivating students to do more work with non-mandatory learning content.

As it turned out, the main challenge of our work on OSSM was to find the right visualization approach that could present students' own knowledge alongside peer knowledge in a form that promotes guidance and comparison. We had to go through three design iterations to build an OSSM interface able to fulfill this promise. For each design iteration, we conducted at least one semester-long classroom study to evaluate the impact of OSSM on system usage and learning. Since the original idea of OSSM was to replace QuizGuide's knowledge-based guidance, all interfaces were designed to provide access to the same set of programming problems. As a result, we were able to use a Java version of QuizGuide (JavaGuide) as a baseline. Analysis of student logs and of the subjective opinions provided through questionnaires helped us to identify problems and to learn important lessons, and that analysis informed the next design iteration. The following subsections describe the design rationales, setup, and findings of three progressively improved OSSM interfaces.

EXPLORING OPEN SOCIAL STUDENT MODELING WITH QUIZMAP: ADAPTIVE NAVIGATION SUPPORT OF PARAMETERIZED QUESTIONS WITH TREEMAP

Our first attempt to design the OSSM interface was relatively close to the cell-based social navigation implementation in Knowledge Sea II. It used a space-filling treemap visualization approach (Shneiderman, 2004) and was accordingly called QuizMap. The treemap approach was selected for its ability to efficiently represent hierarchical information. The map was built to represent three levels of hierarchy: (a) domain topics, (b) problems within each topic, and (c) the performance of individual students on each problem. Individual student performance for each problem was shown on the bottom level of the treemap as a rectangle; the size of each rectangle (tile) represented the amount of work done by a specific student, while the color indicated the amount of knowledge gained, increasing with each successful answer. A student's own performance and peers' performance were presented in contrasting colors (orange and blue, respectively). The exact values of problem attempts and knowledge gain were available as tooltips when hovering over each tile. Based on the contrast of sizes and colors, a student could estimate his or her current knowledge, relative standing in the group, and relative amount of effort needed to catch up to more capable peers. The map also allowed students to distinguish between new, hard topics (little classwork, little success) and older, relatively well-learned topics (a lot of work and success). The QuizMap interface is presented in Figure 2. Note that to support exploration of a new topic that had very little activity and small cells (upper right corner), it was possible to zoom in on a specific topic or question.
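As a rough illustration of how such tiles might be derived (a minimal sketch of our own, not the QuizMap implementation; the log format and field names are hypothetical), the following snippet aggregates an attempt log into per-question tiles whose size reflects the number of attempts and whose color intensity reflects the fraction of successful attempts:

```python
from collections import defaultdict

def build_tiles(attempt_log, target_student):
    """Aggregate an attempt log into QuizMap-style tiles.

    attempt_log: iterable of (student, topic, question, success) tuples
                 (a hypothetical, simplified log format).
    Returns a dict keyed by (topic, question, student) with a tile size
    (number of attempts) and a color intensity (fraction of successes).
    """
    attempts = defaultdict(int)
    successes = defaultdict(int)
    for student, topic, question, success in attempt_log:
        key = (topic, question, student)
        attempts[key] += 1
        successes[key] += int(success)

    tiles = {}
    for key, n in attempts.items():
        topic, question, student = key
        tiles[key] = {
            "size": n,                          # tile area ~ amount of work done
            "intensity": successes[key] / n,    # shade ~ knowledge gained
            # own work shown in orange, peers' work in blue
            "hue": "orange" if student == target_student else "blue",
        }
    return tiles

# Example usage with a toy log
log = [("alice", "Objects", "jObject1", True),
       ("alice", "Objects", "jObject1", False),
       ("bob",   "Objects", "jObject2", True)]
print(build_tiles(log, target_student="alice"))
```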

Figure 2. QuizMap interface


An important design decision inherited from QuizGuide was the use of the map not just to visualize knowledge, but also to provide direct access to learning content. Following a mouse click in the area of a specific question, QuizMap displayed the question in a pop-up window. The questions were provided by the QuizJET system (Hsiao et al., 2010) and were the same as in the earlier QuizGuide. Each question asked the student to predict the results of executing a specific Java program (i.e., mentally execute the program and enter the final value of some variable or the text to be printed by the program). All questions were parameterized, i.e., they included a random parameter that the system instantiated when the question was delivered to a student. As a result, the student could attempt to answer the same question multiple times with different values of the parameter, helping to achieve mastery. The implementation and functionalities of parameterized self-assessment quizzes were described in detail in Hsiao et al. (2010).
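The sketch below illustrates the general idea of a parameterized question (a simplified approximation under our own assumptions; QuizJET's actual template format and answer evaluation are more elaborate): a random parameter is drawn each time the question is delivered, and the student's answer is checked against the value the templated program would print:

```python
import random

# A toy parameterized question: the template and the expected-answer
# function are illustrative stand-ins, not QuizJET's internal format.
QUESTION_TEMPLATE = """int x = {p};
int y = x * 2 + 1;
System.out.println(y);"""

def deliver_question():
    """Instantiate the template with a fresh random parameter."""
    p = random.randint(1, 9)
    return p, QUESTION_TEMPLATE.format(p=p)

def check_answer(p, student_answer):
    """The same question can be attempted repeatedly with new parameters."""
    expected = p * 2 + 1          # the value the toy program would print
    return int(student_answer) == expected

p, text = deliver_question()
print(text)
print(check_answer(p, student_answer=str(p * 2 + 1)))   # True
```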

Figure 3, left and right, shows two different zoom-in scenarios of students' views. Figure 3, left, demonstrates that student A had consistent performance in terms of attempts and success rate across all questions (jObject1 through jObject5) in the topic Object. The size and color density of each cell are relatively similar. On the other hand, Figure 3, right, shows that student B focused on working on certain questions at the expense of others. Considerable effort was spent on question jObject4, which shows a relatively high number of attempts. Student B also did a considerable amount of work on questions jObject2 and jObject3 but achieved a lower success rate (shown by the lighter orange color). With QuizMap, students were expected to identify their own strengths and weaknesses as well as those of their peers and to estimate their readiness for each available question.

Figure 3. Student A (left); student B (right)


We conducted a semester-long classroom study with QuizMap. Using log data of students' interactions with OSSM, we compared the usage of self-assessment questions in a non-adaptive course portal without social visualization, in QuizGuide, and in QuizMap (Table 1). To our surprise, we found that students who worked with QuizMap made fewer attempts and explored fewer topics than in our earlier systems, which indicated a decrease in engaging power. Yet they achieved a much higher success rate, which indicated the impact of the social guidance. We also discovered how the social guidance mechanism does its work: stronger students began to work on problems in QuizMap and left darker and larger tile visual traces among topics for less capable students to follow (Brusilovsky, Hsiao, & Folajimi, 2011). However, the decreased engagement was a clear sign of problems, and the students' feedback helped us to reveal some of these problems. While students expressed high satisfaction with visualizing personal and peer performance as well as with direct access to content, they felt frustrated with the crowded and cluttered interface. Indeed, as shown in the upper right corner of Figure 2, questions from later topics were less likely to be attempted, making the tile sizes very small and thus the whole corner cluttered. Students were also confused by the lack of clear topic progression (the treemap allocates topics by size, not in order) and by the difficulty of comparing personal and peer knowledge (the orange and blue colors were hard to compare). These results motivated the next study.

ENHANCING CONTENT ORGANIZATION VIA PARALLEL INTROSPECTIVEVIEWS: VISUALIZING PEER MODELS THROUGH THE OSSM INTERFACE

Following the QuizMap experience, we realized that we needed a different representation in order to give a clear picture of the content and of each individual student model. We selected the visually appealing IntrospectiveViews interface, which was originally used for visualization of user interest models (Bakalov, König-Ries, Nauerz, & Welsch, 2010), and adapted it to the needs of representing knowledge and social comparison. We called this OSSM interface Parallel IntrospectiveViews (Figure 4) since it offered parallel views of two student models at a time, i.e., a student's own model vs. the model of a peer or of the class as a whole.

Figure 4. Parallel IntrospectiveViews: left pane, visualization of the student's own progress; right pane, visualization of a peer's progress. The circular sectors represent lectures, and the annular sectors represent the topics of individual lectures. The shades of the sectors indicate whether the topic has been covered and, for those that have been covered, the progress the student has made. The box at lower right displays socially annotated quizzes for the selected topic. Color screenshots available at http://www.minerva-portals.de/research/introspective-views/


Figure 4 shows the Parallel IntrospectiveViews interface. The visualization consists of two panes: the left pane displays the student's own progress, and the right one displays the progress of any class peer or of the whole class, whichever is selected from a dropdown menu. Each pane visualizes the respective student's progress as a pie chart. The pie chart representation visually conveys the chronological order of lectures, while the size of a sector represents the number of problems for each lecture. A lecture may consist of one or several topics, which are represented as angular segments placed within the circular sector of the corresponding lecture. This representation allows the student to easily estimate the amount of work in each individual topic or lecture, while the apparent topical sequence provides a good picture of progress through the course. In addition, the ability to view someone else's progress allows the student to quickly find the peers who can help with a difficult topic or quiz. For example, if a student experiences difficulties in completing some quizzes, he or she can use the parallel views to find a classmate who has already successfully completed those quizzes and ask for help. Finally, the ability to view the average progress of the entire class allows the student to relate her progress to that of the whole class and estimate whether she is ahead of or behind the class.

We conducted a classroom study to assess the impact of the Parallel IntrospectiveViews OSSM interface on student learning and engagement (again, comparing it against two baselines: the non-adaptive course portal QuizJET and the adaptive JavaGuide). A summary of results is shown in Table 1. We found a slight increase in student activity in Parallel IntrospectiveViews in comparison with non-adaptive QuizJET, yet not as large as in JavaGuide. We attribute this to the system's strong orientation toward comparing personal performance with the class average (which was the default comparison). While access to social data apparently encouraged less active users to do more work, it could equally discourage stronger students from moving too far ahead of the class. As a result, the difference between the most active and least active users grew smaller. Evidence that this was really happening is the observed 25% decrease in the standard deviation of the number of attempts. In turn, the class as a whole became a bit less adventurous than students using the non-social JavaGuide, exploring fewer questions and topics; this was because the variety of topics comes to some extent from more active users who move ahead of the class. On the other hand, the increase in the success rate demonstrates that knowledge-based and social guidance combined are more effective in guiding students to appropriate questions that they are ready to handle than knowledge-based guidance alone. Community wisdom does matter.

Parallel IntrospectiveViews followed our earlier systems in providing direct access to learning content through the OSSM visualization. By clicking on a topic segment, a student could call up a list of problems within the topic that was also socially annotated (Figure 4, right). Moreover, this click could be made not only on the student's own knowledge map (left pane), but also on the class/peer knowledge map (right pane). Student logs recorded from which part of the interface content was accessed. These logs indicated that students quite actively accessed learning content from their peers' models. This was a strong piece of evidence that students really used the social guidance provided by the interface. Moreover, we found a correlation between the frequency of peer model comparisons and learning gain. The more the students used social guidance by accessing content on the peer side, the higher the post-quiz scores they received (r = 0.34, p = 0.004). Finally, in the subjective evaluation, students expressed positive attitudes toward the system (Hsiao, Bakalov, Brusilovsky, & König-Ries, 2011). Overall, the results demonstrated a promising impact of social visualization on students' motivation and learning. We were inspired by this outcome and decided to investigate the visualization approach further while enhancing the peer comparison aspect that was apparently effective.

PROGRESSOR: REFINED OSSM INTERFACE FOR PERSONALIZED ACCESS TO PROGRAMMING PROBLEMS

The enhanced version of Parallel IntrospectiveViews was named Progressor (Hsiao & Brusilovsky, 2012). It followed the holistic and easy-to-grasp view of knowledge progress used in the earlier systems while improving access to peer progress data. To achieve the latter goal, we implemented a sortable list of thumbnail previews of student peers' models that replaced the earlier blind dropdown menu. According to the small multiples principle (Tufte, 1990), thumbnail peer models of the same shape provide visual constancy and allow the viewer to focus on the differences. To increase the power of social comparison, the visible part of the sorted list displayed the top three performers within the class. We believed that displaying the progress of top students could make the rest of the class more eager to catch up with them than the earlier system's default comparison with the moderate class average. As before, the user can obtain a detailed view of the progress of any peer by clicking his or her thumbnail and switching to one-to-one comparison mode (Figure 5, right). In this mode, the user can obtain detailed information about the peer's progress, including information about progress on individual quizzes. To balance easy access to peer information, Progressor implemented privacy management. The user can grant and revoke access to his or her progress data for each peer individually. Ultimately, the OSSM design was aligned with the information-seeking mantra: overview first, zoom and filter, details on demand (Shneiderman, 1996).
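A minimal sketch of the peer-list logic described above (the data layout and function are our own illustration, not Progressor's code): peers who have granted access to the viewer are sorted by overall progress, and the visible part of the list starts with the top performers:

```python
def visible_peer_list(peers, viewer, top_n=3):
    """Sort peer models for display, respecting per-peer privacy grants.

    peers: list of dicts such as
           {"name": ..., "progress": 0.0-1.0, "shared_with": set_of_names}
    Only peers who granted access to `viewer` are shown, ordered by
    progress so the strongest performers appear first.
    """
    accessible = [p for p in peers if viewer in p["shared_with"]]
    ranked = sorted(accessible, key=lambda p: p["progress"], reverse=True)
    return ranked[:top_n], ranked[top_n:]

peers = [
    {"name": "ann",  "progress": 0.82, "shared_with": {"bob", "eve"}},
    {"name": "carl", "progress": 0.64, "shared_with": {"bob"}},
    {"name": "dana", "progress": 0.91, "shared_with": set()},   # kept private
]
top, rest = visible_peer_list(peers, viewer="bob")
print([p["name"] for p in top])    # ['ann', 'carl']; 'dana' stays hidden
```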

Figure 5. Progressor: Peers' progress is displayed as thumbnails listed at the side of the user's own model (left); peer model comparison (right)




From a semester-long study, cross-compared with previous attempts to organize access to Java problems, we learned that the new design of the OSSM interface was very engaging. Students used Progressor extensively. On average, it achieved the highest system usage of all OSSM interfaces, surpassing even the former champion, JavaGuide (Table 1). Progressor also engaged students to explore more topics and to work on more distinct questions. In addition, the amount of time that students spent on the system (in terms of sessions) doubled. However, can we really argue that the boost in usage could be credited to the new design? To answer this question, we examined student interaction with the peer side of the Progressor interface, such as re-sorting, scrolling, and accessing the peer list. As before, we found that students interacted with the peer side considerably, comparing their progress with the progress of peers and accessing a considerable volume of content from the peer side. Moreover, the more students engaged in interacting with the social features of Progressor, the more likely they were to achieve a higher success rate in answering the self-assessment questions. These findings were consistent with the subjective evaluation outcome, which demonstrated high satisfaction with Progressor.

Table 1. OSSM Studies Statistics Summary

 

                      Baseline                           OSSM
                      QuizJET          JavaGuide         QuizMap         PIV               Progressor
Attempts              80.81 ± 22.06    125.50 ± 20.04    45.55 ± 6.67    113.05 ± 15.17    205.73 ± 40.46
Success rate (%)      42.63 ± 1.99     58.31 ± 7.92      79.30 ± 1.9     71.35 ± 3.39      68.39 ± 4.32
Distinct topics       7.81 ± 1.64      11.77 ± 1.19      4.55 ± 0.59     9.06 ± 1.39       11.47 ± 1.34
Distinct questions    33.37 ± 6.50     46.18 ± 5.15      17.07 ± 2.78    36.5 ± 5.69       52.7 ± 6.92


OSSM DESIGN: LESSONS LEARNED

Through three progressions of OSSM interface design and studies, we learned that there are several important features for designing a successful educational social visualization system.

"

Interactivity and content access: Interactivity is very important for OSSM, as indicated by the large volume of student interaction with it and the impact of this interaction on performance. Interactivity can be implemented in several forms. In our OSSM interfaces, interactivity is implemented by allowing students to access learning content directly by clicking on the knowledge maps. The idea is simple and effective: The visualization of the user model is not just a widget, but also the main entrance to the learning content. Moreover, students can interact not only with the content but also with their peers by communicating (sending requests), comparing, and sorting. Other interactivity features are, for example, zooming, which allows the user model visualization to deal with complexity and large topic domains (Shneiderman, 1996), and manipulation, which allows users to feel in control of their models (Kay, 1997).

"

Sequence: Progressor uses a familiar sequence of lectures and topics (shown clockwise), instead of the random topic allocation used in QuizMap; we believe that this was important to its success. The presented sequence helped students to interpret their progress in the context of their class and provided guidance, yet it did not restrict content access: Students are allowed to explore ahead or to redo already covered topics. Through the series of OSSM studies, we have found that strong students tend to explore ahead of the class and weak students tend to follow them, even on topics that are beyond the current course position. We think that the ability of the interface to encourage strong students to move ahead of the class is important because it fuels the social navigation mechanism.

"

Identity: Identity representation captures all the information belonging to the student and displays it in a clear form. It makes students identify themselves with the model and allows them to easily compare themselves with one another (Bull, 2007; Chen et al., 2007). We complement these ideas with the concept of unity, proposing that perception of identity is higher if the model represents unity. We believe that Progressor, with its clear, concentrated representation of students' own knowledge, better meets these characteristics than the less successful QuizMap, which presented this model in a fragmented manner.

"

Easy comparison: Letting students compare their knowledge and progress with one another is the key to encouraging more work and better performance (Dijkstra et al., 2008). Progressor went further than earlier systems in this direction by allowing students to compare themselves with others on two levels: macro- and micro-comparisons. While viewing their own models, they can see thumbnails of their peers' models. This is the macro-level comparison. It provides high-level comparisons, allowing fast mental overlapping of the colored areas between models. The idea aligns with Tufte's (1990) small multiples principle by providing regularity for drawing attention to data changes among the peer models. When the student clicks inside a peer model, Progressor enters micro-level comparison mode, showing the user's and the peer's models in full size, with greater detail. Both levels of comparison allow students to perform social comparisons at will.

"

Transparency: Comparison implies model exposure, which directly raises privacy issues. In our survey, there was not a single strongly negative opinion regarding privacy and data sharing. Students loved the idea of sharing and comparing. We also found that students actually had more persistent interactions with the system when they opted for model visibility. While privacy is always an important concern in social systems, our study results provided some insight that the openness of the personal model may be viewed positively by students in our target context.

"

Guidance: Open social student modeling interfaces enable implicit guidance, by allowing students to compare themselves with their peers, and explicit guidance, by showing the best students at the top of the list of thumbnail models. In fact, our studies showed that students tended to follow the footprints of the most successful students in the class. This suggests several future research opportunities, such as recommender services.

OPEN SOCIAL STUDENT MODELING WITH TWO KINDS OF LEARNING CONTENT

Our experience with the three OSSM systems presented above allowed us to examine the feasibility and impact of a combined social visualization and open student modeling approach. We also learned several important lessons about how to organize the OSSM interface to maximize its potential impact. Yet our earlier systems had one important limitation: They used OSSM to provide personalized access to just one kind of learning content, parameterized programming questions for Java. This can't be considered a realistic case in the majority of domains. Specifically, in programming-language learning, one usually learns from multiple kinds of activities and types of learning content, e.g., reading textbooks, exploring program examples, watching videos, and writing programs. In addition, more and more kinds of learning resources have been made available online.

These considerations defined our next challenge. So far, we had tested OSSM using one representative content collection and had summarized a set of important attributes for designing effective OSSM. How do we design OSSM interfaces to support many kinds of learning content in parallel? Will the previously identified design principles (key features) confirm the power of the learning community through OSSM with multiple learning-resource collections? Will OSSM visualization provide successful personalized guidance within a richer collection of educational resources? A more recent study, presented in the second part of this paper, attempted to answer these questions. In other words, it addressed the OSSM scalability issue, focused on establishing a multi-content OSSM design, and explored its impact on students' engagement and learning. In the next two sections we report our attempts to design a multi-content OSSM interface, Progressor+, following the principles distilled in our earlier work. We also present the results of a Progressor+ evaluation.

SYSTEM DESIGN: PROGRESSOR+, AN ADVANCED OSSM INTERFACE FOR MULTIPLE CONTENT COLLECTIONS

The goal of Progressor+ was to bring our earlier findings up to scale and explore the feasibility of OSSM in the context of more diverse learning content. To achieve this goal, we designed a new scalable tabular interface to accommodate diverse content. Figure 6 shows a conceptual diagram of the progression of OSSM interfaces and content coverage.

Figure 6. Progression and content coverage of OSSM interfaces


We transferred all the features from the three earlier systems into the new design of Progressor+. The interface is presented in Figure 7. Each student's model is represented as several rows of a large table, with each row corresponding to one kind of learning content and each column corresponding to a course topic. We incorporated two sizable pools of learning content in Progressor+: parameterized self-assessment questions and annotated code examples (thus Figure 7 shows two rows for each student: a quiz progress row and an example progress row). However, the tabular nature of the Progressor+ interface allows more kinds of content to be added when necessary. Each cell is color-coded to show the student's progress in the topic. We used a ten-color scheme to represent percentiles of progress (Figure 8). The use of color-coding permits collapsing table rows that are out of focus, thus making it possible to present a progress picture of a large class in a relatively small space. This feature was inspired by the TableLens visualization, which is known to be highly expressive and scalable (Rao & Card, 1994).
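A sketch of how topic progress could be mapped to such a ten-step color scale (the decile boundaries and palette below are illustrative assumptions, not the exact scheme used in Progressor+):

```python
# Ten illustrative shades from "no progress" to "complete"; the real
# Progressor+ palette may differ.
PALETTE = ["#f7fbff", "#e3eef9", "#cfe1f2", "#b5d4e9", "#93c3df",
           "#6daed5", "#4b97c9", "#2f7ebc", "#1864aa", "#0a4a90"]

def progress_color(solved, total):
    """Map a topic's progress (fraction of items completed) to one of ten bins."""
    if total == 0:
        return PALETTE[0]
    fraction = solved / total
    bin_index = min(9, int(fraction * 10))   # 0-9, i.e., deciles of progress
    return PALETTE[bin_index]

print(progress_color(3, 10))   # fourth shade (30-39% progress)
print(progress_color(10, 10))  # darkest shade (complete)
```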

Figure 7. Progressor+: tabular OSSM visualization interfaces


Figure 8. Ten-color representation of summarized progress analytics


Essentially, all the rows are joined together and presented in a single large table. In other words, all the student models are combined in the same table. There are several other table layout options available to students, including a collapsed view, an expanded view, and a filtered view. The collapsed and expanded views are used to focus on the target student model or on a specific type of content. Students are able to manipulate the views for model comparisons or detailed inspections. The filtered view requires a criterion selection to refine the exploration view. The filtering criteria include sorting progress by content type or by success rate. The default setting of Progressor+ is a fully expanded table of the whole community, sorted by average progress in descending order. To access the content, students interact directly with the Progressor+ table cells by clicking on the intersection of the topic and the content type (Figure 7, right). Once a cell is selected, a panel listing its content is presented, along with usage details for each content item. For instance: How many attempts have there been at the question? How many times has the question been successfully solved? How many lines of annotations have been studied?
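The sketch below approximates the default layout logic just described (field names and the per-topic progress encoding are invented for illustration): every student's quiz and example rows are combined into one table, and students are ordered by average progress in descending order:

```python
def community_table(models):
    """Order student models for the combined Progressor+ style table.

    models: dict mapping student -> {"quizzes": [...], "examples": [...]},
            where each list holds per-topic progress fractions
            (an invented, simplified layout).
    Returns students sorted by average progress, highest first, each with
    its two content rows.
    """
    def average_progress(rows):
        values = rows["quizzes"] + rows["examples"]
        return sum(values) / len(values) if values else 0.0

    order = sorted(models, key=lambda s: average_progress(models[s]), reverse=True)
    return [(student, models[student]) for student in order]

models = {
    "ann": {"quizzes": [0.9, 0.7], "examples": [0.8, 0.6]},
    "bob": {"quizzes": [0.4, 0.2], "examples": [0.5, 0.3]},
}
for student, rows in community_table(models):
    print(student, rows)
```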

STUDY DESIGN AND PROCEDURE

To achieve the objectives of this work, we designed a semester-long classroom study, providing the system as one of the supplemental course tools for the class. A semester-long classroom implementation allowed us to obtain a realistic, longer-term view of the technology compared to the typical two-hour lab study. It also captured a more realistic scenario of the curriculum across the full range of course topics. More importantly, it allowed us to measure long-term student engagement. To validate our hypotheses, we compared our study to three other classroom studies. All three featured the same course, kinds of students, course materials (including textbooks, slides, assignments, and exams), course schedule, pre- and post-tests, and set of self-assessment questions and annotated examples.

The classroom studies were carried out in the undergraduate course Fundamentals of Object-Oriented Programming offered at the School of Information Sciences at the University of Pittsburgh. This is a required course for Information Sciences majors. The students registered for this course were commonly a mixture of students majoring in Information Sciences and undeclared students from the School of Arts and Sciences. Only a few students from other sciences or engineering-related degree programs registered for this course. QuizJET was introduced in the Spring 2008 semester, JavaGuide was introduced in the Fall 2008 semester, Progressor was introduced in the Spring 2011 semester, and Progressor+ was introduced in the Spring 2012 semester.

The Progressor semester is considered the primary baseline because the instructor was the same as in the Progressor+ case. Because the QuizJET and JavaGuide semesters were taught by different teachers, they are considered secondary baselines. It is essential to point out that the systems were used as nonmandatory tools for the course. In this work, we consider the groups of students who used the systems as samples of volunteer subjects. Table 2 shows the composition of the conditions and participants of all the classroom studies, including the number of students, male and female composition, weak and strong distribution, and average scores on pretests.

Table 2. Study Conditions and Participants


 

                                     Secondary baselines                Primary baseline    Experiment
Semester                             Spring 2008      Fall 2008         Spring 2011         Spring 2012
System                               QuizJET          JavaGuide         Progressor          Progressor+
Content                              Quizzes^a        Quizzes^a         Quizzes^a           Quizzes, examples
Number of students
   Overall                           31               38                51                  56
   Working with system               16 (52%)         22 (58%)          30 (59%)            38 (68%)
Male/female student distribution
   Overall                           25/6             27/11             36/15               44/12
   Working with system               13/3             16/6              23/7                32/9
Weak/strong student distribution
   Overall                           16/15            30/8              41/10               49/7
   Working with system               6/9              14/5^b            26/4                34/4
Average scores on pretest
   Overall                           10.18^c          4.97              3.53                3.20
   Working with system               10.20            2.68              3.67                3.05
IS majors/others (undeclared, mechanical engineering, biomedical informatics)
   Overall                           25/6             21/17             23/28               23/33
   Working with system               12/4             10/12             8/22                17/21

^a Examples were also available to the class through a traditional course management portal rather than through navigational support in the social visualization interface. ^b Three students working with the system in the Fall 2008 semester did not take the pretest. ^c That the students who used QuizJET had significantly higher pretest scores could be attributed to three factors: stronger students used the system that term; there were more Information Sciences majors using the system; and there were more repeaters from the previous semester, who had been given the pretest once before.

All four classes were given the same pretest during the first week to assess their prior knowledge of the course material. The systems were introduced to the classes in the third week of each semester and were available to the students from then on, for an overall fifteen-week time period. During the fifteen weeks, students voluntarily logged onto the systems and worked on the QuizJET exercises and/or the WebEx examples. Students were instructed on how to use the systems and advised to use them, but such use was not mandatory for the coursework. The post-tests were administered in the 16th week of the classes to measure students' learning. A questionnaire survey was given shortly after the post-tests. There were four exams, including the final exam, across each semester; they served as important evaluation time marks and were scheduled in the 5th, 9th, 15th, and 17th weeks of the semester, respectively. To ensure that the student cohorts were comparable, we first examined the students' pretest scores. A one-way between-subjects analysis of variance was performed on the pretest scores as a function of the four different interfaces (QuizJET, JavaGuide, Progressor, and Progressor+). We found that students who used the QuizJET system had significantly higher pre-knowledge (M = 10.20, SE = 0.048) than the average of those using the other three systems (M = 3.13, SE = 0.048), F(3, 99) = 3.258, p = 0.0253. The assumption of homogeneity of variance was met, Brown-Forsythe F(3, 99) = 2.750, p = 0.052. The assumption of normality was only met for the QuizJET group (Table 3).
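For readers who want to run the same kind of cohort checks on their own data, the following SciPy sketch (with placeholder arrays rather than our study data) computes a one-way ANOVA over pretest scores, a Brown-Forsythe test of homogeneity of variance, and per-group Shapiro-Wilk normality tests:

```python
import numpy as np
from scipy import stats

# Placeholder pretest-score arrays, one per system condition.
pretest = {
    "QuizJET":     np.array([9.0, 11.5, 10.0, 10.5]),
    "JavaGuide":   np.array([2.0, 3.5, 2.5, 3.0]),
    "Progressor":  np.array([3.0, 4.0, 3.5, 4.5]),
    "Progressor+": np.array([2.5, 3.0, 3.5, 4.0]),
}
groups = list(pretest.values())

# One-way between-subjects ANOVA on pretest scores by system.
f_stat, p_anova = stats.f_oneway(*groups)

# Brown-Forsythe test (Levene's test with the median as center).
bf_stat, p_bf = stats.levene(*groups, center="median")

print(f"ANOVA: F = {f_stat:.3f}, p = {p_anova:.4f}")
print(f"Brown-Forsythe: F = {bf_stat:.3f}, p = {p_bf:.4f}")

# Shapiro-Wilk normality test within each group.
for name, scores in pretest.items():
    w, p = stats.shapiro(scores)
    print(f"{name}: W = {w:.3f}, p = {p:.3f}")
```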

Table 3. Test of Normality of Pretest Scores for Each System

System         Shapiro-Wilk W    df    p
QuizJET        .923              16    .186
JavaGuide      .816              22    .002
Progressor     .838              30    .000
Progressor+    .897              38    .002


Table 4. Summary of all Parameter Statistics of Self-Assessment Quizzes Collection

 

Parameters                QuizJET          JavaGuide        Progressor       Progressor+
Active users              16               22               30               38
Average
   Attempts               80.81 ± 22.06    125.5 ± 25.66    205.73 ± 40.46   190.42 ± 21.20
   Success rate (%)       42.63 ± 1.99     58.31 ± 2.74     68.39 ± 4.32     71.20 ± 4.49
   Sessions               3.75 ± 0.53      4.14 ± 0.65      8.4 ± 1.39       5.18 ± 0.55
Coverage
   Distinct topics        7.81 ± 1.64      11.77 ± 1.07     11.47 ± 1.34     12.92 ± 0.90
   Distinct questions     33.37 ± 6.50     46.18 ± 6.11     52.70 ± 6.92     61.84 ± 4.49


Table 5. Summary of all Parameter Statistics of Annotated Examples Collection

 

Parameters                QuizJET    JavaGuide    Progressor    Progressor+
Active users              21         20           7             35
Average
   Examples               10.86      19.75        28.71         27.37
   Lines                  104.24     116.6        219.71        184.18
   Sessions               4.42       5.35         5.50          4.94
Coverage
   Distinct topics        8.48       9.15         12.28         12.20
   Distinct examples      10.86      17.3         25.125        27.37
   Distinct lines         80.33      67.1         115.22        141.5


We summarize the differences in conditions and the main direction of the effects that we anticipated discovering for both collections of content (Figure 9). In Table 4 and Table 5, we present the average statistics of all parameters for both content collections in all of the conditions. The tables will be broken down and examined in detail in the following subsections: (a) impact on motivation and engagement, (b) impact on students' learning, (c) social mechanism, and (d) subjective evaluation.

Figure 9.  Expected effects of the conditions



OUTCOME VARIABLES

We follow our prior work in examining OSSM interfaces and students' learning effects; we define the variables and measurements in Table 6.

Table 6. Definitions of Parameters

Parameter            Definition
Questions            Number of questions that a student attempted to solve
Success rate         Number of questions correctly answered, divided by all attempts
Examples             Number of examples that a student explored
Lines                Number of lines that a student explored
Sessions             Number of login sessions in which a student visited the system
Exploration rate     Number of lines explored, divided by all example lines
Topic coverage       Distinct number of topics viewed
Question coverage    Distinct number of questions attempted
Example coverage     Distinct number of examples explored
Line coverage        Distinct number of lines explored


Using these variables and other data, we have developed several ways to measure the expected outcome. Outcome measurements are discussed below.

Motivation and Engagement

In investigating students' motivation and engagement, we hypothesized that students would be motivated and engaged when using Progressor+ and would produce more interactions and higher coverage. Specifically, we expected attempts, time, and the diversity of content explored to increase.

First, we summarize system usage to gauge students' motivation and engagement. The usage measures include question attempts, explored examples, explored example lines, course coverage (distinct topics, distinct questions, and distinct examples), and time spent interacting with the systems.

Second, following topic-based personalization guidance, students are expected to focus on the current topics (Zone A, the lecture stream zone, in Figure 10) (Brusilovsky et al., 2009). In Figure 10, the shaded areas in Zones C and D are the regions of activity outside the current course topics, which are self-motivated activities performed by the students. Thus, we measure the ratio of a student's activity performed outside the current course focus to the student's total activity, which captures how widely a student roams across and works with topics in the system. The computational notation is presented in Equation 1, where m denotes motivation and i stands for each student. We called this indicator the M-ratio. To better understand the depth and intensity of students' motivation, this ratio can be further divided into two statistics, forward r_m and backward r_m, where forward r_m represents moving ahead of the current course focus and backward r_m represents revisiting past topics. Both statistics reflect the students' self-motivation to work on content through the systems. The canonical formula is presented in Equation 2. For the M-ratio, we used the number of actions in Zones C and D divided by the total number of actions. To calculate forward r_m, we used Zones A and D; we used Zones A and C to calculate backward r_m.

Equation 1:  $m_i = \dfrac{a_i^{C} + a_i^{D}}{a_i^{\mathrm{total}}}$

Equation 2:  $r_m^{\mathrm{forward}} = \dfrac{a_i^{D}}{a_i^{A} + a_i^{D}}, \qquad r_m^{\mathrm{backward}} = \dfrac{a_i^{C}}{a_i^{A} + a_i^{C}}$

where $a_i^{Z}$ is the number of actions performed by student $i$ in Zone $Z$ and $a_i^{\mathrm{total}}$ is the total number of actions performed by student $i$.
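A small sketch of how these ratios could be computed from zone-tagged activity logs (the log format is hypothetical, and the denominators follow our reading of the description above):

```python
from collections import Counter

def motivation_ratios(actions):
    """Compute the M-ratio and its forward/backward components.

    actions: iterable of zone labels (e.g., 'A', 'C', 'D'), one per student
             action; Zone A is the current lecture stream, while Zones C
             and D hold backward and forward off-topic work, respectively.
    """
    counts = Counter(actions)
    total = sum(counts.values())
    a, c, d = counts["A"], counts["C"], counts["D"]

    m_ratio = (c + d) / total if total else 0.0        # off-focus share of all actions
    forward = d / (a + d) if (a + d) else 0.0          # working ahead of the course
    backward = c / (a + c) if (a + c) else 0.0         # revisiting past topics
    return m_ratio, forward, backward

print(motivation_ratios(["A", "A", "A", "D", "D", "C"]))  # (0.5, 0.4, 0.25)
```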


Figure 10. Projected self-motivated activities



Learning

In investigating students' learning results, we hypothesized that students would benefit from Progressor+ and achieve higher absolute knowledge gain. Meanwhile, we expected that multiple collections of content would result in the highest normalized knowledge gain. Therefore, we used pretest and post-test scores to measure the students' knowledge gain. The canonical formula for a student's absolute knowledge gain defines it as the difference between post-test and pretest scores (Equation 3). Normalized knowledge gain can be computed using Equation 4.

Equation 3:  $g_i^{\mathrm{abs}} = \mathrm{posttest}_i - \mathrm{pretest}_i$

Equation 4:  $g_i^{\mathrm{norm}} = \dfrac{\mathrm{posttest}_i - \mathrm{pretest}_i}{\mathrm{maxscore} - \mathrm{pretest}_i}$
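A minimal sketch of both gain measures, assuming the standard normalized-gain formulation with the test's maximum score as the ceiling:

```python
def knowledge_gains(pretest, posttest, max_score):
    """Absolute gain and normalized gain (assumed standard formulation)."""
    absolute = posttest - pretest
    normalized = absolute / (max_score - pretest) if max_score != pretest else 0.0
    return absolute, normalized

print(knowledge_gains(pretest=3.0, posttest=12.0, max_score=20.0))  # (9.0, ~0.53)
```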

EVALUATION RESULTS

IMPACT ON MOTIVATION AND ENGAGEMENT

If we can demonstrate that there is no significant difference in the amount of work done between Progressor and Progressor+, and that both show significantly higher amounts than the nonadaptive system, QuizJET, we can conclude that Progressor+ works in a scalable content framework. We performed a one-way, between-subjects analysis of variance on the quantity of work done as a function of system condition. Table 7 summarizes the test results of two collections of work for three conditions. As we anticipated, we did not find significant differences in the amount of work done between Progressor and Progressor+. Progressor+ worked as well as Progressor. To show that adaptive navigation support combined with the social visualization approach works for a mixed collection of educational content, we also have to show that this approach supports more educational activities than the nonadaptive system. The statistical analysis showed that, indeed, with both Progressor and Progressor+, students completed significantly higher amounts of work (questions, examples, and lines) than with QuizJET, which confirmed that our approach motivated the students to put more effort into working with the systems.

Table 7. Comparisons of Amount of Work Done with Each System

   

Questions
  QuizJET (M = 80.81, SE = 27.13) vs. Progressor  (M = 205.73, SE = 27.13)   F(1, 44) = 24.20   p < 0.001
  QuizJET (M = 80.81, SE = 27.13) vs. Progressor+ (M = 190.42, SE = 27.13)   F(1, 52) = 23.72   p < 0.001

Examples
  QuizJET (M = 10.86, SE = 4.22) vs. Progressor  (M = 28.71, SE = 4.22)      F(1, 26) = 12.13   p < 0.001
  QuizJET (M = 10.86, SE = 4.22) vs. Progressor+ (M = 27.37, SE = 4.22)      F(1, 54) = 11.89   p < 0.001

Lines
  QuizJET (M = 104.24, SE = 21.32) vs. Progressor  (M = 219.71, SE = 21.32)  F(1, 26) = 9.55    p < 0.001
  QuizJET (M = 104.24, SE = 21.32) vs. Progressor+ (M = 184.18, SE = 21.32)  F(1, 54) = 7.11    p = 0.007

Motivating students to do more work should lead to better performance. To confirm that the motivational effects of the adaptive navigational support and social visualization actually led to more positive outcomes, we examined the course-coverage parameters and performed correlation analyses. To begin with, we found that students worked on significantly more distinct questions and studied more distinct examples and lines with the combined-approach systems than with no support at all. The Pearson correlation coefficient indicated that the more diverse the questions the students tried, the higher the success rate they attained (r = 0.707, p < 0.01), and that the more diverse the examples the students studied, the higher the success rate they attained (r = 0.538, p < 0.01). We also looked at how frequently students repeated the questions, examples, and lines. We found that the more the students repeated the same questions and the more they repeated studying the same lines, the higher the success rate they attained (r = 0.654, p < 0.01; r = 0.528, p < 0.01). The analysis of motivational effects presented in this section suggests that the combined approach can effectively enhance students' motivation in the targeted learning context. In the next section, we continue analyzing the effects of this approach on students' engagement.
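The analyses above follow a standard recipe: a one-way, between-subjects ANOVA comparing the amount of work across system conditions, followed by Pearson correlations between the diversity of work and the success rate. As a minimal illustration only (not the authors' original analysis scripts), the same pattern can be reproduced with SciPy roughly as follows; the per-student arrays are placeholder data.

from scipy import stats

# Placeholder per-student counts of question attempts in two conditions.
quizjet_attempts = [60, 85, 74, 92, 101]
progressor_attempts = [180, 210, 197, 225, 214]

# One-way, between-subjects ANOVA on the amount of work done.
f_stat, p_value = stats.f_oneway(quizjet_attempts, progressor_attempts)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Pearson correlation between diversity of questions tried and success rate.
distinct_questions = [40, 55, 62, 70, 75]
success_rate = [0.52, 0.61, 0.66, 0.71, 0.78]
r, p = stats.pearsonr(distinct_questions, success_rate)
print(f"r = {r:.3f}, p = {p:.4f}")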

In our pre-studies, discussed earlier, we found that students doubled the time spent (in terms of sessions) in Progressor compared to QuizJET. However, we did not find the same pattern when comparing Progressor+ with QuizJET. Nevertheless, the intensity of students' work per session was actually higher in Progressor+. This suggested that students might be spending more time in Progressor+ than in QuizJET, but in fewer sessions. Therefore, we computed the actual average time spent for each content collection (Table 8). The results showed that students spent fewer sessions on quizzes in Progressor+; however, they did work longer per session. On average, they spent 3.72 and 4.94 times more minutes in Progressor and Progressor+, respectively, than in QuizJET. There were no significant differences between Progressor and Progressor+ in total time spent on the quizzes. For the example collection, we found that students spent 4.13 and 3.23 times more minutes studying the annotated examples in Progressor and Progressor+, respectively, than in QuizJET. Both differences were significant. With adaptive navigation support and social visualization combined, students studied more. Overall, each student averaged nearly 5 hours of work on the quizzes in Progressor+ and 5 hours and 20 minutes on studying the annotated examples. These numbers alone demonstrate that our approach successfully engaged students in working with these nonmandatory systems. In addition, we found that the more time students spent on one type of content in Progressor+, the more time they were likely to spend on the other type of content (r = 0.81, p < 0.01). Yet does longer engagement lead to better learning? We discuss the effects on students' learning in the next section.

Table 8. Intensity Measures of Students' Work for All Conditions

Intensity                     QuizJET    JavaGuide   Progressor   Progressor+
Quiz
  Time/session (min)          16.01      36.28       26.75        57.32**
  Total time (min)            60.04      150.19**    224.7**      296.9**
  Attempts/session            21.55      30.31       24.49        36.73
Example
  Time/session (min)          15.73      22.66       20.12        65.00**
  Total time (min)            69.52      121.23      110.66       321.1**
  Examples/session            2.45       3.69        4.56         5.54
  Lines/session               23.54      21.79       34.95        38.69

** p < 0.01

IMPACT ON STUDENTS' LEARNING

Our analysis of this educational innovation is not complete without examining its impact on students' learning. Our approach demonstrated an impressive motivational and engagement effect on students. However, we are mindful that students might learn a subject in many ways (e.g., labs, lectures, and assignments). To demonstrate the efficacy of our approach, we need to show that students' activities with the systems translated into learning gains. Therefore, in this subsection we focus on the association between students' interactions with Progressor+ and their learning results. We consider the pre- and post-test scores to determine general learning gains.

We performed paired sample t-tests to evaluate the significance of the students' absolute knowledge gain. We found that students who used Progressor+ indeed achieved post-test scores (M = 15.0, SD = 0.6) that were significantly higher than their pretest scores (M = 3.2, SD = 0.5), t(37) = 17.276, p < 0.01. In addition, we performed a one-way, between-subjects analysis of variance on normalized knowledge gain as a function of four different systems (QuizJET, JavaGuide, Progressor, and Progressor+). We found that students obtained a significant normalized knowledge gain by working on the self-assessment questions through Progressor+ (M = 0.581, SE = 0.050) compared to QuizJET (M = 0.361, SE = 0.050), F(1, 52) = 4.223, p < 0.05, η2 = 0.025. Following previous motivational and engagement analyses, we also found that the more the students studied (more lines), the more knowledge they gained (r = 0.492, p < 0.01). The more time the students spent on the content (quizzes and examples), the more knowledge they gained (r = 0.563, p < 0.01; r = 0.448, p < 0.01).
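For reference, the gain analysis pairs each student's pretest and post-test scores in a paired-samples t-test; a minimal SciPy sketch with illustrative scores (not the study data) is shown below.

from scipy import stats

# Illustrative paired pretest/post-test scores, one pair per student.
pretest = [2, 4, 3, 5, 1, 4]
posttest = [14, 16, 13, 17, 12, 18]

t_stat, p_value = stats.ttest_rel(posttest, pretest)
print(f"t({len(pretest) - 1}) = {t_stat:.2f}, p = {p_value:.4f}")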

Figure 11. Students' time spent on examples and quizzes in Progressor+, sorted by knowledge gain

[39_21773.htm_g/00023.jpg]

THE MECHANISM OF SOCIAL GUIDANCE

Our study demonstrated that social guidance could match or even surpass traditional knowledge-based guidance in its ability to guide students to the right content at the right time. But what is the mechanism of social guidance? Why is progress data collected from the class and presented in visual form able to provide this remarkable quality of guidance, matching guidance based on expert knowledge? An important mechanism of our approach was to provide social guidance where stronger students would leave tracks for weaker ones to follow. However, that pattern was only found within the context of self-assessment quizzes. Do we find the same pattern among multiple collections of educational resources? Are the stronger students still capable of pioneering a good route for the class? Are there any other social mechanisms and effects derived from Progressor+? In this subsection, we summarize the findings of social visualizations and plot system interactions for pattern discovery.

Figure 12. Student attempts distributed by time and question complexity (top left [a]: QuizJET; top right [b]: JavaGuide; bottom left [c]: Progressor; bottom right [d]: Progressor+)

[39_21773.htm_g/00025.jpg]

[39_21773.htm_g/00027.jpg]

In Figure 12, we plotted all the students' activities on the four systems (QuizJET, JavaGuide, Progressor, and Progressor+) by time and question complexity. Time of interaction is marked on the x-axis, and question complexity goes from easy to complex on the y-axis. Each data point represents an attempt at a question. The blue dots belong to the stronger students, and the orange ones to the weaker students. By visualizing all of the interactions performed in the systems, we observed several interesting findings.

• Under all conditions, students were found actively working with the systems during exam preparation periods. They tended to work on the topics from past to current. During the final exam period, students tended to review the full range of topics. Because the subject is inherently cumulative in nature, we expected to find this pattern as a stable effect.

• With topic-based personalization (b, c, and d), noticeable trends indicated that students progressed through the material, resulting in more work done along the lecture stream. This is an important indication that students' use of the systems extended beyond exam preparation. Without such personalization (a), students were found to work on the systems only for exam preparation, yielding a very skewed distribution of attempts.

• Differences in the amount of work (attempts) are noticeable between the top two panels (a and b) and the bottom two panels (c and d). The bottom two panels (c and d) represent the systems with the influence of social visualization, which resulted in a higher intensity of attempts. This not only demonstrated that the students were voluntarily engaging with the systems, but also showed the consistency of the motivational effect over time.

• The timing of beginning work in the systems also differed by pre-knowledge level under social visualization mediation, where pre-knowledge levels were determined by pretest scores (ranging from a minimum of 0 to a maximum of 20, with a threshold score of 7); strong students scored 7 points or higher (7–13), and weak students scored less than 7 (0–6). The strong students tended to explore the questions before the weaker ones did (the blue dots precede the orange dots) in the social visualization systems (c and d). In Table 9, we calculated the average time by which the strong students attempted a question before the weak students did, across all ranges of question complexity. On average, strong students worked on the questions 38.04 and 37.70 hours ahead of the weaker students in Progressor and Progressor+, respectively. The effect was much more noticeable for complex questions. This indicates a useful pattern of implicit social guidance: stronger students left tracks for weaker students to follow. Without the social guidance (a and b), no clear patterns were found; strong and weak students' actions were mixed. Strong students may be underchallenged, while weak students may suffer from venturing too quickly toward advanced questions.

• A model-exposure difference was found between the two social visualization systems (c and d). Both Progressor and Progressor+ users were exposed to the entire set of models, from each individual's to the whole class's. However, the pie-shaped model in Progressor took a relatively larger portion of the screen space compared to the table model in Progressor+. The preview of model thumbnails was limited by screen size, with the result that only the top students from the class were presented at first glance in Progressor; students had to scroll down the sorted model list to see the rest of the models. In Progressor+, on the other hand, less scrolling was required to view the complete model list. In other words, the top students' models stood out as highlighted models in Progressor. This may have given an extra incentive to the top students, encouraging competitiveness and hard work. Therefore, the model-exposure difference explains why the stronger students tended to work more in Progressor than in Progressor+.

Table 9. Average Number of Hours that Stronger Students Attempted to Answer Questions Before Weaker Students Did, by Question Complexity

 

               Easy     Moderate   Complex   Average
Progressor     17.15    13.39      83.59     38.04
Progressor+     9.17    19.63      84.30     37.70


SUBJECTIVE EVALUATION

In addition to the log analysis, we distributed questionnaires at the end of the classroom study to collect students' opinions of the Progressor+ system. Twenty-four students filled out the survey, 17 male and 7 female. The survey consisted of 23 questions, ranging from the usability of GUI elements to the users' satisfaction with the interface in general. Users were asked to rate each statement on a 5-point Likert scale: 1 = Strongly Disagree; 2 = Disagree; 3 = No Strong Opinion; 4 = Agree; 5 = Strongly Agree. They were also invited to provide free-text comments as they wished. The 23 questions covered five categories: usefulness, ease of use, ease of learning, satisfaction, and privacy and data-sharing. A summary of the survey is shown in Figure 13.

Students generally felt positive about all aspects of the system, and they particularly appreciated the ease of use, ease of learning, and privacy and data-sharing dimensions. Additionally, students found the content provided valuable, given that it was optional for the class. Although there were various opinions on specific interface features, such as sorting and comparing, the overall attitude toward the system's usefulness was positive. The survey results support the design of the interface in terms of content organization, and students' positive responses reinforce the objective system-usage data. We also found that some students expressed their appreciation explicitly in ways other than ratings: some wished the tools were offered for other courses, and some even suggested aligning the content with the exams or counting usage toward participation or course credit.

Figure 13. Summary of responses to Progressor+ subjective evaluation survey questions

[39_21773.htm_g/00029.jpg]

SUMMARY AND DISCUSSION

To explore the value of open social student modeling, we developed several systems and conducted classroom studies to evaluate our hypotheses and overall effectiveness. We learned from our experiences and improved the design with each implementation.

RESULTS SUMMARY

The classroom evaluation of our approach demonstrated that we achieved our main goal: helping students to navigate a rich collection of learning resources. Providing navigation support through open social student modeling visualizations helped students locate the most relevant content and achieve a significantly higher programming problem-solving success rate. In addition, incorporating a mixed collection of content in the OSSM visualizations effectively led the students to work at the right level of questions. Both stronger and weaker students worked with the appropriate levels of questions for their readiness, which yielded consistent performance across all three levels of complexity. Additionally, providing a more realistic content collection on the navigation-supported OSSM visualizations resulted in uniform performance for the group. The classroom study revealed a clear pattern of social guidance, where the stronger students left tracks for the weaker ones to follow. This effect was much more noticeable for complex problems.

The analysis of our approach confirms that students spent more time on the system, attempted more self-assessment quizzes, and explored more annotated examples. They also achieved greater diversity in the self-assessment questions they attempted and the annotated examples they explored. Students were motivated to do more work and were engaged with the system, spending about 5 hours on each collection. Moreover, they achieved better learning results, obtaining significantly higher knowledge gains than students without the support of the system.

The subjective evaluation results showed that students generally felt positive about all aspects of the tool, particularly in terms of ease of use, ease of learning, and privacy and data-sharing. Additionally, students found the content provided valuable. Although there were various opinions on specific interface features, such as sorting and comparing, the overall attitude toward the system's usefulness was positive. These survey results confirm the design of the interface in terms of content organization, and the students' positive responses complement the objective system-usage data.

CONTRIBUTION TO THE EDUCATION FIELD

In this paper, we presented a series of innovative OSSM interfaces to support the online learning of a programming language. We summarized the lessons learned from the design and classroom studies. We observed the impact of each design.

The first contribution of this project is combining the ideas of adaptive navigational support and social visualization in an open social student modeling interface. The combined approach lowers the modeling complexity required for knowledge-based personalization and increases the precision of social navigation support across the increasingly large and diverse body of educational resources. This approach lowers the barrier to semantically enriched online education and brings online education closer to the modern classroom. In addition, the approach has been shown to effectively guide students to the right content at the right time. It is among the pioneering works in the OSSM realm.

Second, this work summarized the design principles for personalized guidance using OSSM visualization, based on a series of pre-studies.

Third, this work established a scalable framework based on the design principles. The implementation, Progressor+, was evaluated in this study. This framework allows extending the content collections to simulate a more realistic online learning environment. In addition to e-learning, the classroom study also demonstrated that the tool can serve as a complement to real classrooms.

Fourth, the underlying theories of adaptive navigational support and social visualization complement each other when brought together. According to studies of learners' choices and beliefs about self-testing, students are generally overconfident about their memories and underestimate the amount they will learn by studying (Kornell & Son, 2009). This overconfidence in understanding is more severe among less advanced learners (Falchikov & Boud, 1989), who most need to improve (Falchikov & Goldfinch, 2000). Therefore, this work unveiled the social comparison mechanism by providing comparative interfaces and demonstrating, through quantitative analyses, the performance of stronger and weaker students under this approach.

LIMITATIONS AND FUTURE WORK

All of the systems discussed were provided as supplemental tools for the same course. While we attempted to provide as realistic a scenario as possible by incorporating diverse learning objects into the learning environment, students in the non-controlled classroom context were still able to learn the subject in many different ways. For example, hands-on coding experience plays a very important role in learning a programming language, and in our curriculum students claimed to benefit most from the laboratory sessions. The system used was therefore just one of the factors that contributed to learning. The content collections used in this work did not cover all of the knowledge taught in the programming course; however, we took semantic questions into account when measuring students' learning.

The first OSSM interface was introduced in the spring semester of 2010, while the latest system, Progressor+, was introduced in the spring semester of 2012. Because social technology is rapidly evolving, students could potentially have been exposed to mass social media within these two years and gradually become more comfortable with using social tools. Our study is not able to account for this phenomenon.

In addition, we recognize that the current design supports implicit guidance: adaptive navigation support provides personalized progress guidance, and the wisdom of the crowd shapes the learning paths. In the future, we plan to enhance explicit guidance, for example by providing recommendations. Using information about peers' prior success may allow us to recommend suitable topics to students at the point where they have just failed. While explicit recommendations in the user-model visualization would provide more proactive personalized guidance, we will face the challenge of implementing this personalization without decreasing users' interest in comparing themselves with their peers. However, we think that such issues can be addressed by enhancing the visualization, for example by using different transparency levels to mark recommended and non-recommended topics.

References

Ahn, J. W., Brusilovsky, P., Grady, J., He, D., & Syn, S. Y. (2007). Open user profiles for adaptive news systems: Help or harm? In Proceedings of the 16th international conference on World Wide Web (WWW '07) (pp. 11–20). New York, NY: Association for Computing Machinery. doi:10.1145/1242572.1242575

Azevedo, R., Guthrie, J. T., & Seibert, D. (2004). The role of self-regulated learning in fostering students' conceptual understanding of complex systems with hypermedia. Journal of Educational Computing Research, 30(1), 87–111.

Bakalov, F., König-Ries, B., Nauerz, A., & Welsch, M. (2010). IntrospectiveViews: An interface for scrutinizing semantic user models. In P. De Bra, A. Kobsa, & D. Chin (Eds.), User modeling, adaptation, and personalization (pp. 219–230). Berlin, Germany: Springer-Verlag.

Barolli, L., Koyama, A., Durresi, A., & De Marco, G. (2006). A web-based e-learning system for increasing study efficiency by stimulating learner's motivation. Information Systems Frontiers, 8(4), 297–306.

Brusilovsky, P. (2007). Adaptive navigation support. In P. Brusilovsky, A. Kobsa, & W. Neidl (Eds.), The adaptive web: Methods and strategies of web personalization (Vol. 4321, pp. 263–290). Berlin, Germany: Springer-Verlag.

Brusilovsky, P., Chavan, G., & Farzan, R. (2004, August). Social adaptive navigation support for open corpus electronic textbooks. Paper presented at the Third International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems (AH 2004), Eindhoven, the Netherlands.

Brusilovsky, P., Hsiao, I. H., & Folajimi, Y. (2011). QuizMap: Open social student modeling and adaptive navigation support with TreeMaps. In C. D. Kloos, D. Gillet, R. M. Crespo Garcia, F. Wild, & M. Wolpers (Eds.), Towards ubiquitous learning (pp. 71–82). Berlin, Germany: Springer-Verlag.

Brusilovsky, P., Sosnovsky, S., & Shcherbinina, O. (2004, November). QuizGuide: Increasing the educational value of individualized self-assessment quizzes with adaptive navigation support. Paper presented at the World Conference on E-Learning (E-Learn 2004), Washington, DC.

Brusilovsky, P., Sosnovsky, S., & Yudelson, M. (2009). Addictive links: The motivational value of adaptive link annotation. New Review of Hypermedia and Multimedia, 15(1), 97–118.

Bull, S. (2004, September). Supporting learning with open learner models. Paper presented at the 4th Hellenic Conference on Information and Communication Technologies in Education, Athens, Greece.

Bull, S., & Britland, M. (2007, July). Group interaction prompted by a simple assessed open learner model that can be optionally released to peers. Paper presented at the Workshop on Personalization in E-learning Environments at Individual and Group Level at the 11th International Conference on User Modeling (UM 2007), Corfu, Greece.

Bull, S., & Kay, J. (2007). Student models that invite the learner in: The SMILI:() open learner modelling framework. International Journal of Artificial Intelligence in Education, 17(2), 89–120.

Bull, S., & Kay, J. (2013). Open learner models as drivers for metacognitive processes. In R. Azevedo & V. Aleven (Eds.), International handbook of metacognition and learning technologies (pp. 349–365). New York, NY: Springer.

Buunk, A. P., & Gibbons, F. X. (2007). Social comparison: The end of a theory and the emergence of a field. Organizational Behavior and Human Decision Processes, 102(1), 3–21.

Chen, Z.-H., Chou, C.-Y., Deng, Y.-C., & Chan, T.-W. (2007). Active open learner models as animal companions: Motivating children to learn through interacting with My-Pet and Our-Pet. International Journal of Artificial Intelligence in Education, 17(2), 145–167.

Cialdini, R. B., Wosinska, W., Barrett, D. W., Butner, J., & Gornik-Durose, M. (1999). Compliance with a request in two cultures: The differential influence of social proof and commitment/consistency on collectivists and individualists. Personality and Social Psychology Bulletin, 25(10), 1242–1253.

Corbett, A. T., Anderson, J. R., & O'Brien, A. T. (1995). Student modeling in the ACT programming tutor: Adjusting a procedural learning model with declarative knowledge. In P. Nichols, S. Chipman, & B. Brennan (Eds.), Cognitively diagnostic assessment (pp. 19–41). Hillsdale, NJ: Erlbaum.

Darnon, C., Dompnier, B., Gilliéron, O., & Butera, F. (2010). The interplay of mastery and performance goals in social comparison: A multiple-goal perspective. Journal of Educational Psychology, 102(1), 212.

Dieberger, A. (1997). Supporting social navigation on the World Wide Web. International Journal of Human-Computer Interaction, 46, 805–825.

Dieberger, A., Dourish, P., Höök, K., Resnick, P., & Wexelblat, A. (2000). Social navigation: Techniques for building more usable systems. Interactions, 7(6), 36–45.

Dijkstra, P., Kuyper, H., Werf, G. v. d., Buunk, A. P., & Zee, Y. G. v. d. (2008). Social comparison in the classroom: A review. Review of Educational Research, 78(4), 828–879.

Dimitrova, V. (2003). STyLE-OLM: Interactive open learner modelling. International Journal of Artificial Intelligence in Education, 13(1), 35–78.

Dimitrova, V., Self, J., & Brna, P. (2001, July). Applying interactive open learner models to learning technical terminology. In International conference on user modeling (pp. 148–157). Heidelberg, Germany: Springer.

Falchikov, N., & Boud, D. (1989). Student self-assessment in higher education: A meta-analysis. Review of Educational Research, 59(4), 395–430.

Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. Review of Educational Research, 70(3), 287–322.

Farzan, R., & Brusilovsky, P. (2008). AnnotatEd: A social navigation and annotation service for web-based educational resources. New Review of Hypermedia and Multimedia, 14(1), 3–32.

Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7, 117–140.

Freeman, L. C. (2000). Visualizing social networks. Journal of Social Structure, 1, 115.

Gershon, N., & Page, W. (2001). What storytelling can do for information visualization. Communications of the ACM, 44(8), 31–37.

Hsiao, I.-H., Bakalov, F., Brusilovsky, P., & König-Ries, B. (2011, July). Open social student modeling: Visualizing student models with parallel introspective views. Paper presented at the 19th International Conference on User Modeling, Adaptation and Personalization (UMAP 2011), Girona, Spain.

Hsiao, I. H., & Brusilovsky, P. (2012). Motivational social visualizations for personalized e-learning. In A. Ravenscroft, S. Lindstaedt, C. Delgado Kloos, & D. Hernández-Leo (Eds.), 21st century learning for 21st century skills (pp. 153–165). Berlin, Germany: Springer.

Hsiao, I.-H., Brusilovsky, P., & Sosnovsky, S. (2008, November). Web-based parameterized questions for object-oriented programming. Paper presented at the World Conference on E-Learning (E-Learn 2008), Las Vegas, NV.

Hsiao, I.-H., Sosnovsky, S., & Brusilovsky, P. (2010). Guiding students to the right questions: Adaptive navigation support in an e-learning system for Java programming. Journal of Computer Assisted Learning, 26(4), 270–283.

Huguet, P., Dumas, F., Monteil, J. M., & Genestoux, N. (2001). Social comparison choices in the classroom: Further evidence for students' upward comparison tendency and its beneficial impact on performance. European Journal of Social Psychology, 31(5), 557–578.

Kaplan, A., & Maehr, M. L. (2007). The contributions and prospects of goal orientation theory. Educational Psychology Review, 19(2), 141–184.

Kay, J. (1997, December). Learner know thyself: Student models to give learner control and responsibility. Paper presented at the International Conference on Computers in Education (ICCE '97), Kuching, Malaysia.

Kay, J. (2008). Lifelong learner modeling for lifelong personalized pervasive learning. IEEE Transactions on Learning Technologies, 1(4), 215–228.

Kornell, N., & Son, L. K. (2009). Learners' choices and beliefs about self-testing. Memory, 17(5), 493–501. doi:10.1080/09658210902832915

Kurhila, J., Miettinen, M., Nokelainen, P., & Tirri, H. (2006). Educo: A collaborative learning environment based on social navigation. In P. De Bra, P. Brusilovsky, & R. Conejo (Eds.), Adaptive hypermedia and adaptive web-based systems (pp. 242–252). Berlin, Germany: Springer.

Light, P., Littleton, K., Bale, S., Joiner, R., & Messer, D. (2000). Gender and social comparison effects in computer-based problem solving. Learning and Instruction, 10(6), 483–496.

Linton, F., Joy, D., Schaefer, H. P., & Charron, A. (2000). OWL: A recommender system for organization-wide learning. Educational Technology & Society, 3(1), 62–76.

Mabbott, A., & Bull, S. (2004). Alternative views on knowledge: Presentation of open learner models. In J. C. Lester, R. M. Vicari, & F. Paraguaçu (Eds.), Intelligent tutoring systems (pp. 689–698). Berlin, Germany: Springer.

Mazza, R., & Dimitrova, V. (2007). CourseVis: A graphical student monitoring tool for supporting instructors in web-based distance courses. International Journal of Human-Computer Studies, 65(2), 125–139. doi:10.1016/j.ijhcs.2006.08.008

Méndez, J. A., Lorenzo, C., Acosta, L., Torres, S., & González, E. (2006). A web-based tool for control engineering teaching. Computer Applications in Engineering Education, 14(3), 178–187. doi:10.1002/cae.20080

Mitrovic, A., & Martin, B. (2007). Evaluating the effect of open student models on self-assessment. International Journal of Artificial Intelligence in Education, 17(2), 121–144.

Pintrich, P. R., & De Groot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82(1), 33.

Rao, R., & Card, S. K. (1994). The table lens: Merging graphical and symbolic representations in an interactive focus + context visualization for tabular information. In B. Adelson, S. Dumais, & J. Olson (Eds.), Proceedings of the SIGCHI conference on human factors in computing systems (pp. 318–322). doi:10.1145/191666.191776

Shneiderman, B. (1996, September). The eyes have it: A task by data type taxonomy for information visualizations. Paper presented at the IEEE Symposium on Visual Languages, Washington, DC.

Shneiderman, B. (2004). Treemaps for space constrained visualization of hierarchies: An historical summary of Treemap research and applications. Retrieved from http://www.cs.umd.edu/hcil/treemaps/

Triantafillou, E., Pomportsis, A., Demetriadis, S., & Georgiadou, E. (2004). The value of adaptivity based on cognitive style: An empirical study. British Journal of Educational Technology, 35(1), 95–106.

Tufte, E. R. (1990). Envisioning information. Cheshire, CT: Graphics Press.

Vassileva, J. (2008). Toward social learning environments. IEEE Transactions on Learning Technologies, 1(4), 199–214.

Vassileva, J., & Sun, L. (2007). Using community visualization to stimulate participation in online communities. E-Service Journal, 6(1), 3–40.

Vassileva, J., & Sun, L. (2008). Evolving a social visualization design aimed at increasing participation in a class-based online community. International Journal of Cooperative Information Systems, 17(4), 443–466.

Wattenberg, M. (1999, May). Visualizing the stock market. In CHI '99 extended abstracts on human factors in computing systems (pp. 188–189). New York, NY: ACM.

Weber, G., & Brusilovsky, P. (2001). ELM-ART: An adaptive versatile system for Web-based instruction. International Journal of Artificial Intelligence in Education, 12(4), 351–384.

Wexelblat, A. (1999, January). History-based tools for navigation. In Proceedings of the 32nd Annual Hawaii International Conference on System Sciences (HICSS-32) (12 pp.). IEEE.

Zapata-Rivera, J.-D., & Greer, J. E. (2000, June). Inspecting and visualizing distributed Bayesian student models. Paper presented at the 5th International Conference on Intelligent Tutoring Systems (ITS 2000), Montreal, Canada.

Zimmerman, B. J. (1990). Self-regulated learning and academic achievement: An overview. Educational Psychologist, 25(1), 3–17. doi:10.1207/s15326985ep2501_2




Cite This Article as: Teachers College Record, Volume 119, Number 3, 2017, pp. 1–42, https://www.tcrecord.org, ID Number: 21773.


About the Author
  • I-Han Hsiao
    Arizona State University
    E-mail Author
    I-HAN HSIAO is Assistant Professor of Computer Science at Arizona State University. She is actively involved in the A.I.-supported education for computer science (AIEDCS) and computer-supported peer review in education (CSPRED) communities through organizing and co-chairing workshops. Her expertise and research interests encompass adaptive educational technology, educational technology evaluation, open student modeling, and visual learning analytics. She has also been teaching CS1 and CS2 courses for more than eight years. Dr. Hsiao received her Ph.D. in Information Sciences from the University of Pittsburgh in 2012.
  • Peter Brusilovsky
    School of Information Sciences, University of Pittsburgh
    E-mail Author
    PETER BRUSILOVSKY is Professor of Information Science and Intelligent Systems at the University of Pittsburgh, where he also directs the Personalized Adaptive Web Systems (PAWS) lab. Dr. Brusilovsky has been working in the field of adaptive educational systems, user modeling, and intelligent user interfaces for more than 25 years. He has published numerous papers and edited several books on adaptive hypermedia and the adaptive web. Peter is the editor-in-chief of IEEE Transactions on Learning Technologies and a board member of several journals, including User Modeling and User Adapted Interaction and ACM Transactions on Social Computing. Dr. Brusilovsky received his Ph.D. in Computer Science from Moscow State University in 1987. He also holds a Doctor honoris causa degree from the Slovak University of Technology in Bratislava.
 