Conclusion Week 12

Final Reflections

Introduction

At the start of the module we discussed the fundamental question of what ‘data’ actually are. I have to confess that this at first seemed trivial, but as we explored it through the first tutorial and the subsequent Moodle discussion, it became apparent that there were some important questions to address. Initially I thought of data as having both a raw form and an analysed form. However, it became evident to me that the term ‘raw data’ is something of an oxymoron (Gitelman and Jackson, 2013), as it’s never truly ‘raw’.

Williamson (2017) explains how all data is inherently partial, selective and representative. This seemed obvious when considering smaller-scale data such as surveys and focus groups, where logistics impose limits, but I found it more alarming when considering large digital data sets, where decisions over what is recorded, and how, are baked into products. It became apparent that taking the time at the start of the module to consider the semantics of ‘raw data’ was an important exercise. I can certainly now relate to why Kitchin (2014, as cited in Williamson, 2017) suggests that a more accurate word for data would be ‘capta’, highlighting that it is taken, not given.

Learning with Data

In this first block we focussed on student ‘learning’ and how it is increasingly shaped by data-driven technologies and practices. The main area of focus was the discourse around personalised learning, a topic Eynon (2015) stresses is in need of a critical approach from researchers. Many of the affordances and pitfalls of personalised learning claims were summarised by Tsai, Perrotta and Gašević (2020), who discussed the tension between enhancing students’ control of their learning and, at the same time, compromising their autonomy.

For my visualisation, I reflected on my own experience of how I studied over the three-week block. I was able to highlight the aspects of learning that can’t easily be recorded through learning analytics platforms, such as activities that happen offline or in personal apps that do not feed into any school or university performance dashboard. This activity made me reflect on how the promises of data-driven learning practices are at best premature and at worst fraught with danger, despite the bullish attempts at datafication by technology providers. I also discovered that the measures reported by such systems are not accurate representations of learning, but at best only proxies. However, as Bulger (2016: 16) states, ‘this gap is not made explicit’.

Teaching with Data

In this block the literature and subsequent discussions focussed on the increased use of data in education and its potential effects on teaching. With the large-scale platformisation of education well underway, more power is being given to data over human judgment, and teacher autonomy is being limited (van Dijck, Poell & de Waal, 2018). Raffaghelli and Stewart (2020) highlight how teachers are also being trained to do more with data, rather than to be critical of it.

My visualisation for this block focussed on push notifications over a 24-hour period, highlighting the dangers of an uncritical approach to data. In illustrating the vast amount of personalised data sent to me over one day, I was overwhelmed by what I received. Imagining this through the lens of education, it became evident that there would always be more data than time to interpret it, let alone critique it. Conducting this visual exercise made me realise that teachers have a real challenge ahead in maintaining autonomy over pedagogies and personal values, as these don’t easily reduce down to data. It seems that the ‘good teacher’ is increasingly being defined as one who is familiar with their data and responsive to it (Harrison et al., 2020: 405).

Governing with Data

The final block on governing with data focussed predominantly on how institutions and educational practices are shaped by the growing datafication of educational governance. Key themes included the concept of ‘accountability’ as a way for schools, universities and teachers to evidence effectiveness through large data sets. Aligned with this were discussions of ‘performativity’, in which practices change to maximise performance against those measures.

For my visualisation I illustrated how my devices are connected by data through different levels of integration, and how technology choice is becoming more difficult as I invest more and more data into ecosystems. I used this as a proxy to argue that, when scaled up to educational institutions, these choices can have more profound effects, as data’s strategic and logistical importance gives rise to ‘Informative Power’ that can influence major educational decisions (Anagnostopoulos, Rutledge & Jacobsen, 2013: 11).

Settling on the idea for this final block’s visualisation was particularly difficult, as I felt most of the literature referenced primary and secondary education, e.g. school inspection processes (Ozga, 2016), test-based accountability (Anagnostopoulos, Rutledge & Jacobsen, 2013), or data (mis)use in public education (Fontaine, 2016). As someone who has spent most of my career in Higher Education, I found it difficult to relate to some of the ideas in the same way I had with the two previous blocks. Through my visualisation I wanted to highlight the issues as I saw them, and for me the most logical way was to start with a simple example that could be understood on a small scale, so that it could then be used to help understand the issues at an institutional, regional, national and supranational scale. My visualisation therefore ended up as more of a network map, rather than the result of cumulative data collection as in previous blocks.

Closing Thoughts

I have thoroughly enjoyed this reflective exercise of combining hand-drawn visualisations with blog entries. The creative process of hand-drawing data, inspired by ‘Dear Data’ (Lupi and Posavec, 2016), allowed me to express ideas that would have been very difficult to convey through text alone.

References

Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. 2013. Introduction: Mapping the Information Infrastructure of Accountability. In Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (Eds.) The Infrastructure of Accountability: Data Use and the Transformation of American Education. Cambridge, MA: Harvard Education Press.

Bulger, M. 2016. Personalized Learning: The Conversations We’re Not Having. Data & Society working paper. Available: https://datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf

Eynon, R. (2015) The quantified self for learning: critical questions for education, Learning, Media and Technology, 40:4, 407-411, DOI: 10.1080/17439884.2015.1100797

Gitelman, L. and Jackson, V. 2013. Introduction. In Gitelman, L. (Ed.) ‘Raw Data’ Is an Oxymoron. Cambridge, MA: MIT Press, pp. 1-14.

Harrison, M.J., Davies, C., Bell, H., Goodley, C., Fox, S. & Downing, B. 2020. (Un)teaching the ‘datafied student subject’: perspectives from an education-based masters in an English university. Teaching in Higher Education, 25:4, 401-417, DOI: 10.1080/13562517.2019.1698541

Lupi, G. and Posavec, S. (2016) Data postcards, Dear Data. Available at: http://www.dear-data.com/all (Accessed: November 4, 2022).

Ozga, J. 2016. Trust in numbers? Digital Education Governance and the inspection process. European Educational Research Journal, 15(1) pp.69-81

Raffaghelli, J.E. & Stewart, B. 2020. Centering complexity in ‘educators’ data literacy’ to support future practices in faculty development: a systematic review of the literature. Teaching in Higher Education, 25:4, 435-455, DOI: 10.1080/13562517.2019.1696301

Tsai, Y-S. Perrotta, C. & Gašević, D. 2020. Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics, Assessment & Evaluation in Higher Education, 45:4, 554-567, DOI: 10.1080/02602938.2019.1676396

Van Dijck, J., Poell, T., & de Waal, M. 2018. Chapter 6: Education, In The Platform Society, Oxford University Press

Williamson, B. 2017. Digital Education Governance: political analytics, performativity and accountability. Chapter 4 in Big Data in Education: The digital future of learning, policy and practice. Sage.

Block 3: Governing with Data Week 11

Block 3 Visualisation: The Procurement of Technology and its Governing Effects for Education

The final hand-drawn visualisation, showing the personal devices I typically use and how they’re connected to each other through data and software ecosystems.

Both in my job and in my personal life I use a lot of technology, and several devices are at the core of this: a laptop, smartphone, tablet, smartwatch and headphones, which I’ve acquired over several years. Initially, decisions over which items to buy were heavily influenced by metric data in the form of price, features, pros and cons, and review scores, as well as personal preference. More recently, however, as devices need updating or a phone contract comes to an end, the decision over which brand to invest in is getting more complex due to the investments I’ve already made.

The device I’m using often determines where my data is stored. This can be anything from bookmarks and saved passwords to backups of photos and cloud-hosted documents. As I’ve invested more into the Google ecosystem over the past few years, decisions over buying a new piece of technology have started to become governed by somewhat unconscious decisions I made years prior.

Owning devices by the same manufacturer typically gives me enhanced functionality and ease of use through seamless integration, as they’re part of the same technological ecosystem. While the choice of ecosystem I invest in for personal use largely only affects me, the picture is very different when scaled up to education. When schools and universities procure technology, the implications for its users are far more profound.

Within education, decisions over which technologies to invest in are not solely based upon institutions’ own requirements, as they would be for an individual using personal devices. Instead, these decisions are often heavily influenced by technology providers through an essentialist view that assumes the technology being procured embodies pedagogical principles of its own (Hamilton and Friesen, 2013). This places BigTech companies and EdTech providers in a very powerful position within education, a sector that has previously been resistant to attempts at datafication. However, in recent years, perhaps with the Covid-19 pandemic serving as a catalyst, there appears to have been a step-change in the volume of data being shared with technology providers.

With VLEs and Student Information Systems capturing data on a large scale, this is giving rise to what Anagnostopoulos, Rutledge and Jacobsen (2013: 11) refer to as ‘Informative Power’. In this scenario, the more schools and universities invest their data, the more that data is relied upon to make strategic decisions. This has led to decision-making in education being decentralised from governments, schools and universities to complex networks of data, in what Ozga et al. (as cited in Williamson, 2017) describe as the ‘governance turn’. This in itself may not seem immediately concerning, but the software which collects this data is often designed without direct involvement from educators.

Technology providers typically decide how their platforms should be used within education, without direct involvement from schools and universities. This may be in part due to education’s democratic approach to decision-making, which is likely to be seen as incredibly slow and inefficient by technology companies that are used to working at speed and seeing all problems as solvable through algorithms. With technology providers increasingly working with governments and viewing themselves as thought leaders for schools and universities, we’re also starting to see the development of ‘fast policies’ that are less informed by evidence-based practices and more by ‘what works’ or ‘best practice’ (Williamson, 2017: 68).

While there is now an abundance of technology available for education, the level of control given to educators has arguably been compromised. Software is now typically licensed on a SaaS (Software as a Service) basis, where technology companies provide the software, the servers that store the data, and regular updates that institutions have very little (if any) control over. This results in schools, universities and teachers having to change their practices to justify their worth through performance measures, in what Williamson (2017: 75) refers to as ‘performativity’. Coupled with this is the concept of ‘accountability’, where this apparent effectiveness must also be evidenced through large data sets.

With a focus on performance metrics at seemingly every level, from student to teacher to school or university, EdTech platforms are positioning themselves as able both to define the problem and to provide the solution. The modern-day ‘centres of calculation’ are arguably now in schools themselves, made possible through the software tools provided to education to facilitate the large-scale harvesting of personal data (Latour, 1986, as cited in Williamson, 2017).

Of course, not all educational institutions buy into off-the-shelf digital dashboard solutions. It’s common for wealthier institutions to resource in-house teams who can scrutinise how their data is being used and then build data layers and reports from the ground up based upon their needs. This is, however, typical of the more privileged Western institutions such as Russell Group and Ivy League universities. On the other side of the spectrum, there are underfunded schools that largely buy all-in-one solutions from the likes of Google and Microsoft. At the extreme end of the scale are ‘Global South’ institutions, a term used by Toshkov (2018) to describe anything from the poor and less developed to the oppressed and powerless.

Global South institutions, as Prinsloo (2020) argues, don’t have the political infrastructures or financial resources in place to challenge datafication. Due to their oppressed situation, which has by no means been helped by their Global North counterparts, South African politicians and policymakers are highly susceptible to the West’s ‘data gaze’ (Beer, 2019, as cited in Prinsloo, 2020), with its promise of technology that is speedy, accessible, revealing, and offers a 360-degree view.

In this final visualisation I’ve attempted to illustrate how technology providers are deliberately designing solutions that rely on the large-scale use of data. Whether it’s a personal device such as a smartphone, or a VLE, the premise is the same: to provide a very low, frictionless point of entry that, through accelerated datafication, becomes very impractical to get out of. Individuals and educational institutions alike are increasing their reliance on data and, rather worryingly, local pedagogical practices that were previously a hurdle are being abandoned to accommodate the functionality embedded in technology and representable through data.

Bibliography

Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. 2013. Introduction: Mapping the Information Infrastructure of Accountability. In Anagnostopoulos, D., Rutledge, S.A. & Jacobsen, R. (Eds.) The Infrastructure of Accountability: Data Use and the Transformation of American Education. Cambridge, MA: Harvard Education Press.

Hamilton, E. and Friesen, N. 2013. Online Education: A Science and Technology Studies Perspective. Canadian Journal of Learning and Technology, 39(2).

Prinsloo, P. 2020. Data frontiers and frontiers of power in (higher) education: a view of/from the Global South. Teaching in Higher Education, 25(4), pp. 366-383.

Williamson, B. 2017. Digital Education Governance: political analytics, performativity and accountability. Chapter 4 in Big Data in Education: The digital future of learning, policy and practice. Sage.

Block 2: Teaching with Data Introduction Week 8

Block 2 Visualisation: Teaching with Data

A visualisation showing the push notifications sent to an individual’s smartphone over a single day.

My visualisation for Block 2, ‘Teaching with Data’, stems from Raffaghelli and Stewart’s (2020) recommendation to expand data literacy to include critical, ethical and personal dimensions as well as technical proficiency. Raffaghelli and Stewart explain that most of the focus in education so far has been on training teachers to be more data literate and to make more use of the abundance of data available to them. Taking this concept at face value, I chose to experiment with an uncritical acceptance of digital data for 24 hours. For one day, I enabled every available push notification on my smartphone and recorded what I received. Within this visualisation, I also attempted to categorise the notifications into those that were purely informational, those that were attempts to pull me back into the app, and those that were directly targeted ads. The result was a very disruptive and frustrating day, as I frequently dismissed notifications that were largely irrelevant.
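For anyone curious how this recording worked in practice, the minimal sketch below (using invented example entries and hypothetical app names, not my actual notification log) shows one way each notification could be captured with an app, a time and one of the three categories, then tallied at the end of the day:

```python
from collections import Counter

# Invented example entries, not my real notification data: each push
# notification is logged with the app it came from, the time it arrived,
# and one of the three categories used in the visualisation.
notifications = [
    {"app": "email",    "time": "07:15", "category": "informational"},
    {"app": "social",   "time": "08:02", "category": "re-engagement"},
    {"app": "shopping", "time": "12:40", "category": "targeted ad"},
    {"app": "news",     "time": "13:05", "category": "informational"},
    {"app": "social",   "time": "21:30", "category": "re-engagement"},
]

# Tally how many notifications fell into each category over the 24 hours.
counts = Counter(n["category"] for n in notifications)

for category, count in counts.most_common():
    print(f"{category}: {count}")
```

Even in this toy form, a simple tally like this makes it easy to see at a glance which kinds of notification dominated the day.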

While this 24-hour experiment is a tenuous and somewhat crude analogy for how teachers are affected by datafication, it allowed me to surface contentious discussion points often raised by educators. Through experience and training, teachers continually strive to develop and improve their practice. They will often draw upon the softer skills of teaching, such as humour, empathy and personal judgement, to engage students in learning. However, in a world where decisions are increasingly informed by big data sets, these autonomous teaching practices can be seen as ineffective and inefficient. As van Dijck, Poell and de Waal (2018) state, “Datafication and personalization are pushed as the mantras of a new educational paradigm where human judgment is increasingly replaced by a product of predictive analytics”.

The use of digital technology in education seems to have gone well beyond the ideal of technology as a supplement to teaching. Driven by rose-tinted views of the potential of big data, educational infrastructure has largely been moved into the cloud, and teaching and learning practices are increasingly being shaped by the feature sets of the technology available. As summarised by Harrison et al. (2020: 402), “It seems we cannot now think or experience education without thinking or experiencing data”.

So how does my visualisation relate to teaching with data? While, as an individual, I have a lot of autonomy over what data I choose to use and engage with, this isn’t quite the same in education. The way that schools operate – and in turn teachers teach – is increasingly becoming a reactive process in response to metrics such as university league tables and student feedback, which Harrison et al. (2020) refer to as ‘performativity’. Whilst not direct measures of teaching, these learner-focussed metrics are frequently used as proxies for effective teaching. So in the case of my notifications dashboard, dismissing or turning off these notifications could be seen as bad practice, or at least as not using data to its potential. A more desirable action would be to look at the notifications through an essentialist perspective, believing that there’s lots of potential there that I should be able to utilise if I invest more time in the ecosystem.

There is a danger that, in order to respond to these metrics in a way that scales, teachers will be put under pressure by their institutions to change their pedagogies so that everything fits on the digital platforms required to generate the data demanded to assess learning (Williamson, Bayne & Shay, 2020). It appears that teacher autonomy is compromised not only at an institutional level but also at a national level, as governments endorse platformisation whilst ignoring academic autonomy (van Dijck, Poell & de Waal, 2018).

What is needed is a more critical view of datafication, where teachers are encouraged to challenge the assumptions made by predictive analytics and learning dashboards. This should include more transparency over big data practices and their associated risks and implications, which can in turn empower more responsible use of technology in future (Sander, 2020).

Throughout my own education, some of the most influential teachers were those who had strong personal values, used humour, and at times probably used unorthodox pedagogies to engage students. However, it now seems that these personal values are being lost, as they don’t easily reduce down to data entities or scale up to repeatable practices that can be enacted through digital platforms. Increasingly, the ‘good teacher’ is defined as one who is familiar with their data and responsive to it (Harrison et al., 2020: 405).

Bibliography

Harrison, M.J., Davies, C., Bell, H., Goodley, C., Fox, S. & Downing, B. 2020. (Un)teaching the ‘datafied student subject’: perspectives from an education-based masters in an English university. Teaching in Higher Education, 25:4, 401-417, DOI: 10.1080/13562517.2019.1698541

Raffaghelli, J.E. & Stewart, B. 2020. Centering complexity in ‘educators’ data literacy’ to support future practices in faculty development: a systematic review of the literature. Teaching in Higher Education, 25:4, 435-455, DOI: 10.1080/13562517.2019.1696301

Sander, I. 2020. What is critical big data literacy and how can it be implemented? Internet Policy Review. 9(2) DOI: 10.14763/2020.2.1479

van Dijck, J., Poell, T., & de Waal, M. 2018. Chapter 6: Education, In The Platform Society, Oxford University Press

Williamson, B., Bayne, S. & Shay, S. 2020. The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), pp. 351-365.

Block 1: Learning with Data Week 5

Block 1 Visualisation: Learning with Data

Note – sorry to publish this so late in the week.

Below this text you will see my final data visualisation for Block 1: ‘Learning With Data’. Beneath the visualisation, I have provided a reflection to explain my thought process and the themes I have tried to convey on the topic of learning with data.

My final visualisation for this block attempts to capture the complexity of learning data, focussing on the distinction between what can and can’t be captured by digital platforms. In the vein of Lupi and Posavec (2016), I created a legend beneath my visualisation that helps to explain what each of the abstract icons, shapes and colours represents.

Reflections

My visualisation documents how I’ve engaged in this block’s learning activities over the past three weeks. In contrast to the limitations of learning analytics platforms, I’ve attempted to also record the activities that happen outside of institutionally licensed platforms, such as how unsupported personal resources are used. This includes both digital activities, e.g. WhatsApp, and non-digital ones, e.g. drawing or spoken conversations.

According to technology companies and government think-tanks, education has a serious problem. In an age where almost everything is personalised based on big data sets and content recommendation algorithms, education has largely resisted this techno-solutionist approach. This has led to accusations from the ‘LAW Report’ that educational institutions are ‘driving blind’ (Friesen, 2019) by not exploiting the apparent potential of big data.

However, as someone who both works and studies in higher education, I wanted to illustrate, through documenting my learning practices, how learning doesn’t just happen in digital platforms, nor within the walls (physical or virtual) of a school or university. Learning happens in many different spaces, and any personalised learning technology that is solely dependent on digital data risks disregarding human factors and the socio-cultural contexts in which the data is generated (Perrotta, 2013, cited in Tsai, Perrotta & Gašević, 2020: 555).

There are many ways in which one can engage in learning, such as reading, note-taking, watching videos, or having discussions with others. I recorded these learning engagements in my visualisation through the use of simplified icons. Reflecting on the visualisation now, a glaring omission is the act of ‘thinking’. During this block I spent by far the most time wrangling with ideas in my head for my visualisation and trying to synthesise concepts from different readings with my learning data. What I wanted to convey through this visualisation is that only I (the individual, the student) have a complete picture of how I learn, and there are a lot of activities that contribute to my learning that go unseen by both teachers and technology companies.

I chose to make three distinctions in the way my data activities were recorded. At the top of the visualisation, illustrated by the small green rectangles, are activities that are authored in digital form using software that is licensed and/or controlled by Edinburgh University. This includes the likes of the WordPress website, Moodle, and its connected technologies such as the Library reading list. All of my interactions in these environments can potentially be seen by staff at Edinburgh University, and in theory a profile of my ‘learning’ can be built up from the engagements that happen within these tools. This is the space that technology companies want to occupy and expand.

Being used to recording data in digital form, I still made an initial spreadsheet to keep track of my learning interactions.

However, there is a lot of activity and communication that happens outside of this centrally supported space but still within digital tools. This is the second category, which I defined through the use of brown rectangles. It may include technologies that tutors can see, such as Twitter, but in fact a lot of this data is generated in tools that the university doesn’t have sight of: for instance, discussions I had on WhatsApp and over Teams with my colleagues at work, or notes made in personal apps such as OneNote and other digital note-taking tools.

The final category, represented by grey squares, comprises those learning interactions that happen entirely offline, unseen by both the University and EdTech companies. This includes activities such as sketching my visualisations, handwriting notes, and having verbal discussions with work colleagues relating to the course. The three colours (green, brown and grey) were actually borrowed from an earlier abandoned idea of representing this data as an aerial view of farmers’ fields, playing on the visual metaphor of data harvesting: the easiest data to see and harvest is at the surface, but there is a lot of data beyond reach and some that is almost impossible to get to. The point is that while I have a view of all of my learning data, the platforms I use only see the interactions I make within their own environments.

Alongside each activity icon, I placed a coloured dot representing the actors that can see that learning interaction. Data authored online in an open environment, like this WordPress website, could be seen by many people, including the University, tech companies (e.g. the hosting provider), my peers and, at a surface level, the public. At the other end of the scale, all of the interactions that happened offline can only be seen by me.
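To make this scheme a little more concrete for readers who think in data structures, the short sketch below (with made-up example entries and hypothetical actor names, rather than my real activity log) shows one way the underlying record could be modelled: each activity carries a category and the set of actors able to see it, and filtering on any single actor immediately exposes how partial that actor’s view is.

```python
from dataclasses import dataclass, field

# Made-up example entries, not my actual log: each learning activity is tagged
# with one of the three categories from the visualisation and the set of
# actors who can see it.
@dataclass
class Activity:
    name: str
    category: str   # "institutional digital", "personal digital" or "offline"
    visible_to: set = field(default_factory=set)

log = [
    Activity("Blog post on the WordPress site", "institutional digital",
             {"me", "university", "hosting provider", "peers", "public"}),
    Activity("WhatsApp discussion with colleagues", "personal digital",
             {"me", "app provider", "colleagues"}),
    Activity("Hand-drawn visualisation sketch", "offline",
             {"me"}),
]

# A university-run learning analytics platform would only ever see the
# activities it is party to -- here, just the first entry.
university_view = [a.name for a in log if "university" in a.visible_to]
print(university_view)
```

The same filter applied from my own perspective returns every entry, which is precisely the asymmetry the dots in the visualisation are meant to show.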

In bringing all of these ideas together, I have attempted to illustrate that personalised learning solutions are flawed in that they can only ever see a partial representation of my (a student’s) activity. This is dangerous, as the limitation doesn’t appear to deter tech companies from trying to apply to education the same solutions that recommend purchases on Amazon or films on Netflix (Bulger, 2016: 2).

For BigTech companies, the frustration here is that learning appears to be messy and spread out across different environments. This is not what software engineers, who would typically want to apply an algorithmic solution to such a problem, are used to. It may therefore not come as a surprise that EdTech companies want to reduce the likelihood of learning activities taking place in these third-party and offline spaces. Instead they’d like to provide the entire data and learning infrastructure for education, giving them a panoramic view of students’ learning and more opportunities to make money from education, which is seen as a largely untapped space. However, some attempts to provide not only the software but also the hardware for education have surfaced some quite concerning ethical issues, as evidenced when Google was sued by the New Mexico Attorney General for allegedly collecting student data through Chromebooks, an allegation the company denied, having previously paid $170m to settle a separate case over collecting children’s data on YouTube (Statt, 2020).

Such reductionist views of learning, which force students into practices approved by technology companies, inevitably remove student agency, and practices outside of the digital ecosystem come to be seen as undesirable. As Eynon (2015) states, data-centric approaches to learning are not aware of broader social settings, which increases the likelihood that those who aren’t “performing” as well are written off as a problem, when in reality they may simply not be spending as much time online as others. This is at odds with self-determination theory, which posits that “students need autonomy (belief that they have choice and independence in identifying and pursuing goals)” (Bulger, 2016: 13).

In my visualisation and this short reflection I’ve only been able to scratch the surface of learning with data, but hopefully I have been able to communicate that learning is fragmented and happens in many different spaces between which connections do not always exist. The broader perspective of learning offered here, whilst built only on surface-level data, arguably creates a more comprehensive picture of learning than that offered by any analytics dashboard or personalised learning solution.

Bibliography

Bulger, M. 2016. Personalized Learning: The Conversations We’re Not Having. Data & Society working paper. Available: https://datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf

Eynon, R. (2015) The quantified self for learning: critical questions for education, Learning, Media and Technology, 40:4, 407-411, DOI: 10.1080/17439884.2015.1100797

Friesen, N. 2019. The technological imaginary in education, or: Myth and enlightenment in ‘Personalised Learning’. In M. Stocchetti (Ed.), The digital age and its discontents. University of Helsinki Press.

Statt, N. 2020. Google sued by New Mexico attorney general for collecting student data through Chromebooks. The Verge, 20 February. Available: https://www.theverge.com/2020/2/20/21145698/google-student-privacy-lawsuit-education-schools-chromebooks-new-mexico-balderas

Tsai, Y-S. Perrotta, C. & Gašević, D. 2020. Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics, Assessment & Evaluation in Higher Education, 45:4, 554-567, DOI: 10.1080/02602938.2019.1676396

Block 1: Learning with Data Week 4

Ideas for Learning with Data Visualisation

Reflecting on my initial ideas

Throughout this block, the reading has encouraged me to reflect upon my own learning and to ponder how learning analytics solutions might have interpreted me so far based upon my trace data (the data that can be traced through clicks and time spent on site pages). What sort of profile would I have according to a digital platform, and what kind of content would be recommended to me based on this profile?

In connecting this to the activity of creating a hand-drawn data visualisation, I want to explore whether I can visualise the ways in which I learn, through my actions. By recording this data in analogue form from my personal viewpoint, I can build a more holistic view of my actions and activities than any digital learning platform could ever claim to.

The seen vs the unseen (in learning)

If one thing is clear to me, it’s that learning is a very complex and messy subject that doesn’t fit neatly into data tables. Learning doesn’t sit within the confines of a school, college or university; nor does it sit exclusively within a VLE. It happens in lots of different places that are unique to each individual learner.

Photo by Ricardo Viana on Unsplash

If I think about how I learn, I spend an awful lot of time thinking, reading, and noting ideas down. Most of this will go unseen by a learning analytics platform. So how would such systems deal with this? Would I be sent push notifications to tell me I’m falling behind? Would a flag appear beside my name marking me as needing some form of intervention?

Any interactions I make ‘online’ will no doubt be recorded, but where those interactions take place determines who sees the data, and often that won’t be the university. For instance, the university could see that I opened a notification, authored a blog post, contributed to a discussion, or clicked on an item in the reading list. But what about when I’m searching the web for ideas, trying to connect concepts, reading ‘offline’, or having discussions with friends, family and even peers outside of the VLE? All of this will go unseen.

Ideas for a visualisation

I attempted to list the activities I engage in that all contribute to my learning. Note that some activities happen ‘offline’ and some exist only as idea formulation that is not documented anywhere.

I want to explore whether I can document each learning interaction I make throughout a given week. This will include the seen (let’s assume that’s anything digital) and the unseen, which covers not only learning that happens in analogue form (perhaps ‘offline’) but also those activities that are online yet happen within websites and apps that are personal to me and not seen by the university and its licensed technologies. I want to emphasise the divide between the seen and the unseen in this landscape as a vehicle to highlight the limitations of any proposed learning analytics or personal tutor-type system, and I’m exploring ways in which I can illustrate this in my visualisation.

I started thinking about the data I produce and how where it is authored determines who sees it.

Learning fields

Aerial view of fields. Photo by Peter Ford on Unsplash.

I have an idea that will be a visual play on the metaphor of data harvesting. This will take the form of hand-drawn data lines that sit within silos (like farmers’ fields). Here, I want to emphasise the division between the data by using each field to represent a different data provider. One field may be university servers, another licensed vendors like Microsoft, and others the companies that I (as a student) interact with outside of the university’s provision, e.g. Google, Amazon Web Services, app providers, and so on. There are also ‘fields’ that more than one actor can see; for instance, activity that happens within a licensed service like Microsoft could be seen by both University staff and Microsoft themselves.

Then, going a step further, there are the interactions that happen offline, like conversations with family and friends, reading a paper that’s been downloaded, or making notes in a locally installed text editor. You could argue this aspect of learning isn’t recorded at all; perhaps only its output is recorded, through an assessment for example.

Abstract representation

The concept of the seen and unseen of learning through the metaphor of fields seemed quite logical in my head, but I have struggled to actually illustrate this. Therefore, I’m exploring whether a more abstract Lupi & Posavec-esque representation may prove simpler.

Starting to experiment with a more abstract visualisation of ‘learning’ data, highlighting the distinction between data that is born digital versus data that is created in an analogue form.

As I started to develop my ‘learning’ data visualisation, I was able to show both the type of learning activity I was engaging in and whether that activity exists online or not. However, this only scratches the surface of the complexity of recording learning data. Signifying that data is produced or exists online means it’s trackable, but who sees that data determines whether it is useful for learning. For example, I often use OneNote to make notes and draft ideas, and Edinburgh University provides me with access to this application. However, as I use Microsoft 365 as part of my job, that activity and note-taking typically happens on my work’s tenancy, whether that is the right thing to do or not. If a learning analytics platform wanted to ‘capture’ my learning activity, this would prevent that information from being recorded and analysed.

This can be even further removed when that idea formulation happens on completely unsupported platforms that are solely my (the learner’s) choice, for example using WhatsApp for informal discussions with peers outside of the VLE.

This is the challenge I’m wrestling with: how best to visualise the complexity of the seen and unseen in learning.


Introductory Post: Reasons, Reflections & Hopes for CDE

As I reach the end of the introductory section of this module, ‘Critical Data and Education’, it is an opportune moment to reflect on my reasons for studying the course, what I hope to achieve by the end, and which particular aspects of the topic of ‘data’ interest me most. In this short post, I’ll address each of these in turn.

Reasons for studying the course

I started studying the MSc in Digital Education in September 2020, and since my first course, ‘IDEL’, I’ve been drawn to the topic of data. Whenever courses have given me the flexibility to explore this area, I have done so. In my IDEL essay, ‘Who Benefits from Digital Education?’, I took a critical view of the role of BigTech companies and their unrelenting attempts to infiltrate education during the pandemic, and I have continued to explore data in other courses since.

Most recently, in my assignment ‘the sociomaterial dilemma’ for the course Education, Data & Culture, I explored whether teachers can play a bigger role in the development of educational technologies.

My last module was actually Research Methods; my reason for studying it earlier than recommended was that I wanted to take Critical Data and Education next, as I believe this course will play a key role in helping me define my dissertation topic and question. Up until this point, I have looked at critical perspectives on data at a fairly macro level, largely focussing on BigTech companies. Whilst hugely relevant, these perspectives have always felt too vast for a dissertation, and I honestly wouldn’t know where to start. Through studying this module I hope to sharpen my focus and identify an area of critical data and education that I can explore in my dissertation.

What I hope to achieve by the end of the course

I’ve probably alluded to this in the paragraphs above, but through studying this course I hope to achieve two things. Firstly, I hope to delve deeper into the topic of critical data. In the past I’ve been drawn to this topic, but it has sometimes been just one element of a larger subject. In this course, I can focus on critical data practices over a longer period of time and in more depth than previously afforded.

Through focussing on critical data over 12 weeks, my second hope is to come away with a clear – or at least clearer – understanding of my dissertation question, which I’m certain will be on the topic of critical data in Higher Education.

What particular aspects of the topic of ‘data’ interest me most?

Coming from a background as a Learning Technologist, I feel well versed in speaking to people about the affordances of digital technologies. However, prior to starting the MSc in Digital Education, I realise I had only a fairly surface-level appreciation of how data is used through these technologies. I guess I was focussed on the bits people see – the ‘front end’, as developers would say.

I’m interested in a number of different areas, from the practices and malpractices of BigTech and EdTech companies in education to their predominant focus on decision-makers and students rather than academic staff.

On the institutional side of things, I am also interested in the more local practices that happen within schools, colleges and universities. In particular, I’m interested in the challenges higher education now faces in resourcing people who can contribute to discussions of datafication. Whilst we want academics, researchers and digital education professionals in these discussions, it’s not easy to resource or support this amid increasing demands such as higher student numbers. Often the focus is on expansion, which results in more of a dependency on off-the-shelf, cloud-based technologies and solutions. Ideally, institutions would establish clear strategy, policy and workflows for data use first and foremost. In practice, however, this is often not the case, and decision-makers are listening to the voices of external EdTech companies who promise increased efficiency, scalability and automation.