
Block 1 Visualisation: Learning with Data

Note – sorry to publish this so late in the week.

Below this text you will see my final data visualisation for Block 1: ‘Learning With Data’. Beneath the visualisation, I have provided a reflection to explain my thought process and the themes I have tried to convey on the topic of learning with data.

My final visualisation for this block attempts to capture the complexity of learning data, focussing on the distinctions between what can and can't be captured by digital platforms. In the vein of Lupi and Posavec (2016), I created a legend beneath my visualisation that helps to explain what each of the abstract icons, shapes, and colours represents.

Reflections

My visualisation documents how I've engaged in this block's learning activities over the past three weeks. In contrast to the limitations of learning analytics platforms, I've attempted to also record the activities that happen outside of institutionally licensed platforms, such as the use of unsupported personal resources. This includes both digital activities (e.g. WhatsApp) and non-digital ones (e.g. drawing or spoken conversations).

According to technology companies and government think-tanks, education has a serious problem. In an age where almost everything is personalised based on big data sets and content recommendation algorithms, education has largely resisted this techno-solutionist approach. This has led to accusations from the 'LAW Report' that educational institutions are 'driving blind' (Friesen, 2019) by not exploiting the apparent potential of big data.

However, as someone who both works in, and is a student in, higher education, I wanted to illustrate through documenting my learning practices how learning doesn't just happen in digital platforms. Nor does it happen only within the walls (physical or virtual) of a school or university. Learning happens in many different spaces, and any personalised learning technology that is solely dependent on digital data risks disregarding human factors and the socio-cultural contexts in which the data is generated (Perrotta, 2013, cited in Tsai, Perrotta & Gašević, 2020: 555).

There are many ways in which one can engage in learning, such as reading, note-taking, watching videos, or having discussions with others. I recorded these learning engagements in my visualisation through the use of simplified icons. Reflecting on this visualisation now, a glaring omission I see is the act of 'thinking'. During this block, I spent by far the most time wrangling with ideas in my head for my visualisation and trying to synthesise concepts from different readings with my learning data. What I wanted to convey through this visualisation is that only I (the individual, the student) have a complete picture of how I learn. And there are a lot of activities that contribute to my learning that go unseen by both teachers and technology companies.

I chose to make three distinctions between the ways my data activities were recorded. At the top of the visualisation, illustrated by the small green rectangles, are activities that are authored in a digital form, using software that is licensed and/or controlled by Edinburgh University. This includes the likes of the WordPress website, Moodle, and its connected technologies such as the Library reading list. All of my interactions in these environments can potentially be seen by staff at Edinburgh University, and in theory a profile of my 'learning' can be built up from the engagements that happen within these tools. This is the space that technology companies want to occupy and expand into.

Being used to recording data in digital form, I still made an initial spreadsheet to keep track of my learning interactions.
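For readers curious what such a log could look like in practice, here is a minimal sketch in Python. The entries, field names, and dates are hypothetical examples of my own invention, not taken from any institutional system; the categories simply mirror the three distinctions in the visualisation (university-licensed digital, third-party digital, and offline).

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical log of learning interactions, mirroring the spreadsheet:
# each row records the activity, which of the three categories it falls
# into, and who can see it.
LOG = """date,activity,category,visible_to
2022-10-03,read core text,university,university;me
2022-10-04,WhatsApp discussion,third-party digital,peers;me
2022-10-05,sketched visualisation,offline,me
2022-10-06,Moodle forum post,university,university;peers;me
"""

def tally_by_category(csv_text: str) -> Counter:
    """Count how many logged interactions fall into each category."""
    rows = csv.DictReader(StringIO(csv_text))
    return Counter(row["category"] for row in rows)

if __name__ == "__main__":
    print(tally_by_category(LOG))
```

Even this toy tally makes the argument visible: a platform-side analytics tool would only ever see the 'university' rows, while the 'offline' rows exist solely in my own record.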

However, there is a lot of activity and communication that happens outside of this centrally supported space, but still within digital tools. This is the next category I defined, through the use of brown rectangles. It may include technologies that tutors can see, such as Twitter, but in fact a lot of this data is generated in tools that the university doesn't have sight of. For instance, discussions I had in WhatsApp and over Teams with my colleagues at work, or digital notes taken in personal apps such as OneNote and other note-taking tools.

The final category, represented by grey squares, covers those learning interactions that happen entirely offline, unseen by both the University and EdTech companies. This includes activities such as sketching my visualisations, taking handwritten notes, and having verbal discussions with work colleagues relating to the course. The three colours (green, brown and grey) were actually borrowed from an earlier, abandoned idea of representing this data as an aerial view of farmers' fields, playing on the visual metaphor of data harvesting: the easiest data to see and manipulate is that at the surface, but a lot of data lies beyond reach, and some is almost impossible to get to. In other words, while I have a full view of my learning data, the platforms that I use only see the interactions I make within their environments.

Alongside each activity icon, I placed a coloured dot representing the actors that see these learning interactions. For data that is authored online in an open environment like the WordPress website, this could be seen by many people including the University, tech companies e.g. hosting, my fellow peers, and at a surface-level the public. At the other end of the scale, all of the interactions that happened offline can only be seen by me.

In bringing all of these ideas together, I have attempted to illustrate that personalised learning solutions are flawed in that they can only see a partial representation of my (a student’s) activity. And this is dangerous, as this limitation doesn’t appear to be deterring tech companies from still trying to apply the solutions that recommend purchases on Amazon or movies to watch on Netflix to education (Bulger, 2016: 2).

For BigTech companies, the frustration here is that learning appears to be messy and spread out across different environments. This is not what software engineers, who would typically want to apply an algorithmic solution to such a problem, are used to. It may therefore come as no surprise that EdTech companies want to reduce the likelihood of learning activities taking place in these third-party and offline spaces. Instead, they would like to provide the entire data and learning infrastructure for education, so they have a panoramic view of students' learning and more opportunities to make money from a sector they see as largely untapped. However, some attempts to provide not only the software but also the hardware for education have surfaced some quite concerning ethical issues, as evidenced when Google were sued for collecting student data through Chromebooks: an allegation they denied, but settled for a mere $170m.

Such reductionist views of learning, which force students into practices approved by technology companies, inevitably remove student agency, and practices outside of the digital ecosystem come to be seen as undesirable. As Eynon (2015) states, data-centric approaches to learning would not be aware of broader social settings, which increases the likelihood that those who aren't "performing" as well would be written off as a problem, when in reality they may simply not be spending as much time online as others. This is at odds with self-determination theory, which posits that "students need autonomy (belief that they have choice and independence in identifying and pursuing goals)" (Bulger, 2016: 13).

In my visualisation and this short reflection, I've only been able to scratch the surface of learning with data, but hopefully I have been able to communicate that learning is fragmented and happens in many different spaces between which connections do not always exist. There is a broader perspective of learning offered here that, whilst only surface-level data, arguably creates a more comprehensive picture of learning than is offered by any analytics dashboard or personalised learning solution.

Bibliography

Bulger, M. (2016) Personalized Learning: The Conversations We're Not Having. Data & Society working paper. Available: https://datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf

Eynon, R. (2015) The quantified self for learning: critical questions for education. Learning, Media and Technology, 40:4, 407-411. DOI: 10.1080/17439884.2015.1100797

Friesen, N. (2019) 'The technological imaginary in education, or: Myth and enlightenment in "Personalised Learning"'. In M. Stocchetti (ed.), The Digital Age and Its Discontents. University of Helsinki Press.

Statt, N. (2020) 'Google sued by New Mexico attorney general for collecting student data through Chromebooks'. The Verge, 20 February. Available: https://www.theverge.com/2020/2/20/21145698/google-student-privacy-lawsuit-education-schools-chromebooks-new-mexico-balderas

Tsai, Y-S., Perrotta, C. & Gašević, D. (2020) Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics. Assessment & Evaluation in Higher Education, 45:4, 554-567. DOI: 10.1080/02602938.2019.1676396

4 replies on “Block 1 Visualisation: Learning with Data”

The data visualizations and reflections you have produced here are excellent, Ross. What comes through most clearly is your informed scepticism about claims of making 'learning' visible from the data traces that can be gathered from and about students. Several scholars in the field of critical data studies make similar observations about the partiality of data and its visualizations, and these are important insights to extend to the educational context.

What you are highlighting is how data visualizations may be understood as kinds of 'graphical facts' that represent distinctive parts of the world. Your own visualization demonstrates the problem of such claims, because, as you have noted, the data traces required to produce such graphical representations are and can only be collected from a very slim selection of sources. Your own data recording has expanded the gaze to account for a wider variety of technologies than those strictly considered 'edtech', but even then there are sharp limits on what can be 'known' from the data they collect.

A lot of learning analytics specialists would say that of course these data traces are reductionist, but that they serve as useful proxies of 'learning'. But the question for me, and I think what you are highlighting, is that such claims must take a particular view or theory of what 'learning' is in the first place.

I was interested very recently to see a new research article detailing the dominant theories of learning in the learning analytics field (https://link.springer.com/article/10.1007/s12528-022-09340-3). The main finding is that 'self-regulated learning' appears to be the main theory in the field. The open question for me is whether this is the dominant theory in the field because it is particularly amenable to being captured as data. What I believe you are suggesting is that there may be very different ways of conceptualizing learning that would not be possible to capture in the form of data traces.

Hi Ben, thanks a lot for providing such rich and timely feedback. You’ve certainly captured the argument I was trying to convey through my visualisation and accompanying reflection.

“A lot of learning analytics specialists would say that of course these data traces are reductionist, but that they serve as useful proxies of ‘learning’”.

That’s true and something I picked up on from the article that details the discussions between Gasevic and Selwyn on this topic. Despite my scepticism, I do think that in a limited capacity, technology can in some form provide proxies of ‘learning’ in the future of education. Maybe more accurately, this will be proxies of engagement.

However, while scholars such as Gašević speak in more humble and pragmatic terms about the capabilities and limitations of technology, I certainly don't think this critical view is shared across the EdTech sector. I can't for one moment believe that BigTech companies would concede that their solutions are limited in their scope and application. Why would they, when their marketing tactics have worked so well in many other sectors?

Thanks for the link to the new article on 'the use and application of learning theory in learning analytics'. This is really interesting, and it is somewhat unsurprising that 'self-regulated learning' is the dominant theory. After all, this aligns well with learner-centric technology solutions. In my practice to date, the most prominent theories of learning are those that focus on social constructivist approaches, which I suspect are far more difficult to quantify and monetise for tech companies.

Thanks again, lots of food for thought here.

Ross

Hi Ross
Just wanted to comment to say I really liked the idea behind your visualisation, as well as the visualisation itself. I very much agree with your point about 'thinking time' being learning time too, and I also agree that trying to capture this as data would be extremely hard, if not impossible.
I think, for me, completing the data collection part of the exercises has highlighted just how much of that 'learning process' can be done away from a book, screen or tutor, as I have been actively thinking about the task and its possible outcomes at varying points of the day, not just when I am making my notes. My 'thinking time' has also allowed time for reflection on what I have heard or read, essentially allowing me to process the information and come to a conclusion, or, more often, think of more questions!
It seems clearer now, having read your blog and Ben's reply, that none of that activity would be captured in data, even in the 'AI classroom' that we spoke about the other day. So would teaching practice have to change to allow for more 'retesting' or 're-staging a task' to see if a student has now grasped a concept they had been working on previously, or is this still just similar to current formative assessment practices? Lots to think about 🙂

Hi Jillian,

Thanks a lot for taking the time to share your reflections on this post. This is all really useful and giving me lots to mull over.

I’m so sorry for not approving this comment earlier.

Ross
