SWRM is a software tool, built with R and Node.js, that plugs into the browser to help learners see in near real time what their colleagues are finding, and how it fits with what they are reading, as they research together.
Picture this common scene: undergraduate psychology students in a seminar are split into subgroups and asked to research different aspects of social influence. They’ll go away, read, discuss, and present their findings back to the whole group. We can visualise the process like this, with time on the y axis and conceptual convergence on the x:
How can a data-analytic tool help with this? SWRM acts at each stage. During exploration and notation, students click a button in their browser to flag a source they have found. As each student in the group does this, the programme reviews the sources using latent semantic analysis to place each one in a shared conceptual space. As more sources are collected, each sits somewhere different in that space, more or less conceptually ‘close’ to every other source. SWRM prunes the connections between the sources using a modified Prim’s algorithm I developed, and visualises the sources as nodes in a graph, with their connections as edges:
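The pruning step can be sketched roughly as follows. This is plain Prim’s algorithm run over cosine distances between source vectors (the modifications SWRM actually makes aren’t described here, and the vectors and function names are illustrative):

```javascript
// Sketch: prune a fully connected similarity graph down to a
// spanning tree using Prim's algorithm on cosine distance.
// Each source is a numeric vector (e.g. its LSA coordinates).

function cosineDistance(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Prim's algorithm: grow the tree by repeatedly attaching the
// conceptually closest unvisited source to any visited one.
function primMST(vectors) {
  const n = vectors.length;
  const inTree = new Array(n).fill(false);
  const edges = [];
  inTree[0] = true;
  for (let step = 1; step < n; step++) {
    let best = { from: -1, to: -1, d: Infinity };
    for (let u = 0; u < n; u++) {
      if (!inTree[u]) continue;
      for (let v = 0; v < n; v++) {
        if (inTree[v]) continue;
        const d = cosineDistance(vectors[u], vectors[v]);
        if (d < best.d) best = { from: u, to: v, d };
      }
    }
    inTree[best.to] = true;
    edges.push(best);
  }
  return edges; // n - 1 edges joining all sources
}
```

The result is the minimal set of edges that still connects every source, so the visualisation shows only the strongest conceptual links rather than a tangle of all pairwise similarities.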
By adding a simple physics engine, the graph can convey more information. The ‘elasticity’ of an edge denotes the two nodes’ conceptual similarity, and where multiple sources sit in proximity they support one another, making those parts of the graph feel stiffer. The graph can be manipulated by the learner, which gives a feel for the relationships between the sources. Consequently, the absolute location of a node is irrelevant, but its position relative to the others is informative.
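The elasticity idea can be sketched as a spring force whose stiffness is the conceptual similarity of the two sources an edge joins, so similar sources pull together firmly while dissimilar ones connect loosely. The constants and names here are illustrative, not SWRM’s actual implementation:

```javascript
// Sketch: Hooke's-law spring force on node a from its edge to node b.
// similarity in [0, 1] acts as the spring constant, so conceptually
// close sources resist being pulled apart more strongly.

const REST_LENGTH = 50; // target on-screen edge length, arbitrary units

function springForce(a, b, similarity) {
  const dx = b.x - a.x;
  const dy = b.y - a.y;
  const dist = Math.hypot(dx, dy) || 1e-9; // avoid divide-by-zero
  const stretch = dist - REST_LENGTH;      // positive when overstretched
  const k = similarity;                    // stiffness = similarity
  return {
    fx: (k * stretch * dx) / dist,
    fy: (k * stretch * dy) / dist,
  };
}
```

Summing this force over each node’s edges on every animation tick is what makes densely corroborated regions of the graph feel stiff under the learner’s cursor.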
To conceptualise and build the tool I turned to users. I conceived of the idea alone, but then worked with learners to establish their experience of the process; it was with their input that the task structure came into focus. Following initial design and development, I ran collaborative design and testing workshops with postgraduate trainee teachers to establish the effectiveness of the program and to better understand how the presence of a tool like this shapes behaviour. I found that users could see at a glance where arguments lacked corroboration and go looking for it. They could investigate how different a source is from the others they had found and decide whether it should remain in their findings. In essence, the tool preempts the ‘Discussion’ phase of the task: some of the groundwork for lexical and conceptual convergence is laid during the exploration/notation phase that would otherwise have had to be negotiated through discussion.
Why did I design and build this? Personally speaking, research is fine, but I felt I needed to make something alongside it to put some of what I had been learning into practice. Practically speaking, the aim was to develop a data-analytic tool that supports collaborative research activity by helping collaborators converge lexically, and thereby to find shared meaning in the content of the task. Finally, the ‘big picture’ aim is to lend credence to the idea that data-analytic tools have vast potential to enhance learning, and to create a basis for the development of more intelligent research-supporting systems.
The outcome was a proof-of-concept tool that was presented at the British Educational Research Association Annual Conference. The abstract is here, a more detailed writeup can be found here, and my presentation slides here. Personally, I got to develop a software tool I otherwise wouldn’t have had the chance to make, and to engage in UX research and data-analytic tool design. Also, it was a lot of fun!
If you would like to discuss this further, please contact me.