Categories: Block 1: Learning with Data, Week 5

Block 1 Visualisation: Learning with Data

Note – sorry to publish this so late in the week.

Below this text you will see my final data visualisation for Block 1: ‘Learning With Data’. Beneath the visualisation, I have provided a reflection to explain my thought process and the themes I have tried to convey on the topic of learning with data.

My final visualisation for this block attempts to capture the complexity of learning data, focussing on distinctions between what can and can’t be captured by digital platforms. In the vein of Lupi and Posavec (2016), I created a legend beneath my visualisation that helps to explain what each of the abstract icons, shapes, and colours represents.

Reflections

My visualisation documents how I’ve engaged in this block’s learning activities over the past three weeks. In contrast to the limitations of learning analytics platforms, I’ve attempted to also record the activities that happen outside of institutionally licensed platforms, such as how unsupported personal resources are used. This includes both digital activities (e.g. WhatsApp) and non-digital ones (e.g. drawing or spoken conversations).

According to technology companies and government think-tanks, education has a serious problem. In an age where almost everything is personalised based on big data sets and content recommendation algorithms, education has largely resisted this techno-solutionist approach. This has led to accusations, such as in the ‘LAW Report’, that educational institutions are ‘driving blind’ (Friesen, 2019) by not exploiting the apparent potential of big data.

However, as someone who both works in – and is a student in – higher education, I wanted to illustrate through documenting my learning practices how learning doesn’t just happen in digital platforms. Nor does it happen within the walls (physical or virtual) of a school or university. Learning happens in many different spaces, and any personalised learning technology that is solely dependent on digital data risks disregarding human factors and the socio-cultural contexts in which the data is generated (Perrotta, 2013, cited in Tsai, Perrotta & Gašević, 2020: 555).

There are many ways in which one can engage in learning, such as reading, note-taking, watching videos, or having discussions with others. I recorded these learning engagements in my visualisation through the use of simplified icons. Reflecting on this visualisation now, a glaring omission I see is the act of ‘thinking’. More than anything else during this block, I spent time wrangling with ideas in my head for my visualisation and trying to synthesise concepts from different readings with my learning data. What I wanted to convey through this visualisation is that only I (the individual, the student) have a complete picture of how I learn, and there are a lot of activities that contribute to my learning that go unseen by both teachers and technology companies.

I chose to make three distinctions between the ways my data activities were recorded. At the top of the visualisation, illustrated by the small green rectangles, are activities that are authored in a digital form using software that is licensed and/or controlled by Edinburgh University. This includes the likes of the WordPress website, Moodle, and its connected technologies such as the Library reading list. All of my interactions in these environments can potentially be seen by staff at Edinburgh University, and in theory a profile of my ‘learning’ can be built up based upon engagements that happen within these tools. This is the space that technology companies want to occupy and expand into.

Being used to recording data in digital form, I still made an initial spreadsheet to keep track of my learning interactions.
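A log like that spreadsheet can be sketched in a few lines of code. The following is a hypothetical illustration only (the column names and sample rows are my own invention, not the actual spreadsheet): each interaction is recorded with an activity, a tool, and a visibility category matching the green/brown/grey distinction used in the visualisation, and a simple tally shows how little of the log a university analytics platform could actually see.

```python
from collections import Counter

# Hypothetical log of learning interactions: (activity, tool, visibility).
# Visibility mirrors the visualisation's three categories:
#   "institutional"    -> green  (university-licensed platforms)
#   "personal-digital" -> brown  (digital, but outside university sight)
#   "offline"          -> grey   (no digital trace at all)
interactions = [
    ("blog post", "WordPress", "institutional"),
    ("reading list click", "Moodle / Library", "institutional"),
    ("discussion", "WhatsApp", "personal-digital"),
    ("note-taking", "OneNote (work tenancy)", "personal-digital"),
    ("sketching visualisation", "pen and paper", "offline"),
    ("verbal discussion", "in person", "offline"),
]

# Only the institutional category is visible to a university
# learning analytics platform.
counts = Counter(visibility for _, _, visibility in interactions)
visible = counts["institutional"]
total = len(interactions)
print(f"Platform sees {visible} of {total} interactions")
```

Even this toy tally makes the limitation concrete: two-thirds of the logged activity falls outside the institutional category, and so outside any analytics dashboard built on it.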

However, there is a lot of activity and communication that happens outside of this centrally supported space, but still within digital tools. This is the next category I defined, through the use of brown rectangles. It may include technologies that tutors can see, such as Twitter, but in fact a lot of this data is generated in tools that the university doesn’t have sight of: for instance, discussions I had in WhatsApp and over Teams with my colleagues at work, or notes made in personal apps such as OneNote and other note-taking tools.

The final category, represented by grey squares, covers those learning interactions that happen entirely offline, unseen by both the University and EdTech companies. This includes activities such as sketching my visualisations, making handwritten notes, and having verbal discussions with work colleagues relating to the course. The three colours (green, brown, and grey) were actually borrowed from an earlier abandoned idea of representing this data as an aerial view of farmers’ fields, playing on the visual metaphor of data harvesting. The idea was that the easiest data to see and manipulate is that at the surface, while a lot of data lies beyond reach and some is almost impossible to get to: while I have a view of all my learning data, the platforms that I use only see the interactions I make within their own environments.

Alongside each activity icon, I placed a coloured dot representing the actors that can see these learning interactions. Data that is authored online in an open environment like the WordPress website could be seen by many people, including the University, technology companies (e.g. the hosting provider), my fellow peers, and, at a surface level, the public. At the other end of the scale, all of the interactions that happened offline can only be seen by me.

In bringing all of these ideas together, I have attempted to illustrate that personalised learning solutions are flawed in that they can only see a partial representation of my (a student’s) activity. And this is dangerous, as this limitation doesn’t appear to be deterring tech companies from trying to apply to education the same solutions that recommend purchases on Amazon or movies to watch on Netflix (Bulger, 2016: 2).

For BigTech companies, the frustration here is that learning appears to be messy and spread out across different environments. This is not what software engineers, who would typically want to apply an algorithmic solution to such a problem, are used to. It may therefore not come as a surprise that EdTech companies want to reduce the likelihood of learning activities taking place in these third-party and offline spaces. Instead, they’d like to provide the entire data and learning infrastructure for education, so that they have a panoramic view of students’ learning and more opportunities to make money from what is seen as a largely untapped space. However, some attempts to provide not only the software but also the hardware for education have surfaced some quite concerning ethical issues, as evidenced when Google was sued for collecting student data through Chromebooks: an allegation the company denied, yet settled for a mere $170m.

Such reductionist views of learning, which force students into practices approved by technology companies, inevitably remove student agency, and practices outside of the digital ecosystem come to be seen as undesirable. As Eynon (2015) states, data-centric approaches to learning are not aware of broader social settings, which increases the likelihood that those who aren’t “performing” as well are written off as a problem, when the reality may simply be that they’re not spending as much time online as others. This is at odds with self-determination theory, which posits that “students need autonomy (belief that they have choice and independence in identifying and pursuing goals)” (Bulger, 2016: 13).

In my visualisation and this short reflection, I’ve only been able to scratch the surface of learning with data, but hopefully I have been able to communicate that learning is fragmented and happens in many different spaces between which connections do not always exist. A broader perspective of learning is offered here that, whilst only capturing surface-level data, arguably creates a more comprehensive picture of learning than that offered by any analytics dashboard or personalised learning solution.

Bibliography

Bulger, M. (2016) Personalized Learning: The Conversations We’re Not Having. Data & Society working paper. Available at: https://datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf

Eynon, R. (2015) The quantified self for learning: critical questions for education. Learning, Media and Technology, 40:4, 407–411. DOI: 10.1080/17439884.2015.1100797

Friesen, N. (2019) ‘The technological imaginary in education, or: Myth and enlightenment in “Personalised Learning”’. In M. Stocchetti (Ed.), The Digital Age and Its Discontents. University of Helsinki Press.

Statt, N. (2020) ‘Google Sued by New Mexico Attorney General for Collecting Student Data through Chromebooks’. The Verge, 20 February. Available at: https://www.theverge.com/2020/2/20/21145698/google-student-privacy-lawsuit-education-schools-chromebooks-new-mexico-balderas

Tsai, Y-S., Perrotta, C. & Gašević, D. (2020) Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics. Assessment & Evaluation in Higher Education, 45:4, 554–567. DOI: 10.1080/02602938.2019.1676396

Categories: Block 1: Learning with Data, Week 4

Ideas for Learning with Data Visualisation

Reflecting on my initial ideas

Throughout this block, the reading has encouraged me to reflect upon my own learning and to ponder how learning analytics solutions might have interpreted me so far based upon my trace data (that is, the data that can be traced through clicks and time spent on site pages). What sort of profile would I have according to a digital platform, and what kind of content would be recommended to me based on this profile?

In connecting this to the activity of creating a hand-drawn data visualisation, I want to explore whether I can visualise the ways in which I learn, through my actions. By recording this data in analogue form from my personal viewpoint, I can build a more holistic view of my actions and activities than any digital learning platform could ever claim to.

The seen vs the unseen (in learning)

If one thing is clear to me, it’s that learning is a very complex and messy subject that doesn’t fit neatly into data tables. Learning doesn’t sit within the confines of a school, college, or university; nor does it sit exclusively within a VLE. It happens in lots of different places that are unique to each individual learner.

Photo by Ricardo Viana on Unsplash

If I think about how I learn, I spend an awful lot of time thinking, reading, and noting ideas down. Most of this will go unseen by a learning analytics platform. So how would such systems deal with this? Would I be sent push notifications telling me I’m falling behind? Would a flag appear beside my name marking me as needing some form of intervention?

Any interactions I make ‘online’ will no doubt be recorded, but where those interactions take place determines who sees this data, and a lot of the time that won’t be the university. For instance, the university could see that I opened a notification, authored a blog post, contributed to a discussion, or clicked on an item in the reading list. But what about when I’m searching the web for ideas, trying to connect concepts, reading ‘offline’, or having discussions with friends, family, and even peers outside of the VLE? All of this will go unseen.

Ideas for a visualisation

I attempted to list the activities I engage in that all contribute to my learning. Note that some activities happen ‘offline’ and some exist as idea formulation that is not documented anywhere.

I want to explore whether I can document each learning interaction I make throughout a given week. This will include the seen (let’s assume that’s anything digital) and the unseen, which covers not only learning that happens in analogue form (perhaps ‘offline’) but also those activities that are online yet happen within websites and apps that are personal to me and not seen by the university and its licensed technologies. I want to emphasise the divide between the seen and the unseen in this landscape as a vehicle to highlight the limitations of any proposed learning analytics or personal-tutor-type system, and I’m exploring ways in which I can illustrate this in my visualisation.

I started thinking about the data I produce, and how where it’s authored determines who sees it.

Learning fields

Aerial view of fields. Photo by Peter Ford on Unsplash

I have an idea that will be a visual play on the metaphor of data harvesting. This will take the form of hand-drawn data lines that sit within silos (like farmers’ fields). Here, I want to emphasise the division between the data by using each field for a different data provider. One shape may be university servers, another licensed vendors like Microsoft; then there are the companies that I (students) interact with outside of the university’s provision, e.g. Google, Amazon Web Services, app providers, and so on. There are ‘fields’ that more than one actor can see: for instance, activity that happens within a licensed service like Microsoft could be seen by both University staff and Microsoft itself.

Then, going a step further, there are the interactions that happen offline, like conversations with family and friends, reading a paper that’s been downloaded, or making notes in a locally installed text editor. You could argue this aspect of learning isn’t recorded at all; you could perhaps only record the output, through an assessment for example.

Abstract representation

The concept of the seen and unseen of learning through the metaphor of fields seemed quite logical in my head, but I have struggled to actually illustrate it. Therefore, I’m exploring whether a more abstract, Lupi & Posavec-esque representation may prove simpler.

Starting to experiment with a more abstract visualisation of ‘learning’ data, highlighting the distinction between data that is born digital versus data that is created in an analogue form.

As I started to develop my ‘learning’ data visualisation, I was able to show both the type of learning activity I was engaging in and whether that activity exists online or not. However, this only scratches the surface in terms of the complexity of recording learning data. Signifying that data is produced or exists online means it’s trackable, but who sees that data determines whether it is useful for learning. For example, I often use OneNote to make notes and draft ideas, and Edinburgh University provides me with access to this application. However, as I use Microsoft 365 as part of my job, that note-taking typically happens on my work’s tenancy, whether that is the right thing to do or not. If a learning analytics platform wanted to “capture” my learning activity, that arrangement would prevent the information from being recorded and analysed.

This is even further removed when that idea formulation happens on completely unsupported platforms that are solely my (the learner’s) choice, for example using WhatsApp for informal discussions with peers outside of the VLE.

This is the challenge I’m wrestling with: how best to visualise the complexity of the seen and unseen in learning.