Children’s engagement with digital media

Along with its Nordic neighbours, Denmark is one of the most digitalised countries in Europe, according to the European Commission's Digital Economy and Society Index (DESI), and Danish children report some of the highest levels of digital media use among European countries (WHO, 2016).

Perhaps consequently, the public debate on screen time and the effects of digital technologies on children is ongoing and contentious. While the figures above indicate that screens and digital media are a major part of childhood in Denmark and that access to digital media and ICTs is widespread, new reports point to potential underlying inequalities in children’s use of digital technology.

Earlier this year, the Danish NGO Børns Vilkår published four reports on Danish children’s experiences and engagement with digital media. One report focused on children aged 5-6, based on interviews as well as surveys with kids, parents and kindergarten teachers. Of the children surveyed, 96% knew how to use a smartphone or a tablet, which emphasises that Danish children have access to digital technologies from an early age (Alim et al, 2019).

We tend to think of screen time as a sedentary activity, but with mobile devices and tablets, it doesn’t necessarily have to be that way. With this in mind, the report sheds light on different ways that Danish children use tablets, such as taking pictures, filming, controlling robots and watching videos – mainly to show that ‘screen time’ can sometimes be a misleading and generic term that doesn’t pay attention to the different activities and forms of play that digital technologies enable. I do find this approach to media use interesting as it allows for a more nuanced understanding of how children – and adults – experience digital technologies, rather than the moral panic over screen time that is sometimes evident in these debates.

Anyway, I wasn’t meant to talk about adults panicking over kids staring at a screen. Back to the report!

When looking at screen use as containing multiple activities, some interesting disparities emerge. For example, the data shows that more boys than girls experiment with digital technologies in multiple ways (measured as whether they use devices in at least four different ways), and girls are more likely to watch videos on their tablets while boys are more likely to use the tablet to control robots (Alim et al, 2019). Slightly sad and predictable…

Further, the report looks at the effect of parental involvement in children’s screen use: parents who see themselves as ‘digitally competent’ enable their children to engage with devices in more varied, creative and active ways, rather than treating the tablet as a small TV screen. Perhaps more surprisingly, parents who are confident using digital technologies themselves also tend to be stricter when it comes to laying down rules for screen time in the family and to facilitating their children’s media use. In this way, digital know-how and media habits are being passed on from parents to their children from a very young age, and the report concludes that not all Danish children have equal opportunities to develop digital skills (Alim et al, 2019).

However, the notion of ‘digital dannelse’ is a hot topic in Denmark at the moment – ‘dannelse’ here roughly relates to education, but also encompasses more philosophical and cultural aspects of digital life – and schools and kindergartens are now paying more attention to how they can ensure that kids grow up with a technical as well as ethical digital compass (Bundsgaard, 2017; Mehlsen & Hendricks, 2018).

A great (and adorable) example of this – although from Norway and not Denmark – is the video below on consent in relation to taking and sharing pictures of others. The video is called “YOU’RE the one who decides” and features a bunch of Norwegian kids singing and asking permission to take pictures of each other. Seriously, watch it, it’s really catchy.



Alim, W., Nebelong, S., Møller Kjeldsen, C., Bindslev With, M., Kristensen Fahnøe, P. M. & Johansen, S. 2019. Børnehavebørns hverdag med digitale medier: Digital Dannelse i Børnehøjde, del 1. Børns Vilkår.

Bundsgaard, J. 2017. Digital Dannelse. Aarhus: Aarhus Universitetsforlag.

Johansen, S. L. 2014. Børns liv og leg med medier. 1. udgave, 1. oplag ed., Pædagogik og læring. Frederikshavn: Dafolo.

Mehlsen, C. & Hendricks, V. 2018. Hvordan bliver vi digitalt dannede? København: Informations Forlag.

World Health Organization (2016). Growing up unequal: gender and socioeconomic differences in young people’s health and well-being.

The Digital Welfare State – a neoliberal Trojan Horse?

A recent UN report examines the emergence and implications of the so-called ‘digital welfare state’ – that is, the ways that systems of social protection and welfare services are now increasingly driven by digital data and facilitated by technologies that automate, predict, identify, surveil and punish citizens (Alston, 2019).

In the report, Special Rapporteur on extreme poverty and human rights, Philip Alston, calls attention to this aspect of digital governance from a human rights perspective. The report is based on submissions from governments, civil society organisations, academics and human rights institutions. Alston is highly critical of the current uses of digital technologies in the welfare state and points to some disturbing issues on a global scale – similar to some of the issues Virginia Eubanks highlights in an American context (Eubanks, 2018).

One of Alston’s main points is that private corporations and tech companies are increasingly involved in designing and shaping the modern welfare state through software programmes. These companies, however, are often ideologically against regulation and, according to Alston, tech companies operate in a “human rights free zone” (Alston, 2019: 13). In spite of this, governments worldwide are allowing corporations significant power over the digital welfare state as well as access to citizens’ data – without the necessary transparency and legal accountability in place.

One reason governments are so open to this development may be the allure of cutting-edge technology: who wouldn’t want a (cost-)effective, innovative welfare state, where citizens can take advantage of new digital technologies that ease their contact with social services – and where, at the same time, the overall welfare budget might shrink?

Who can say no to that? No one wants to seem old-fashioned apparently, so many governments are actively pushing for a digital transformation.

What could go wrong…

Well, according to Alston’s report, we should be concerned, as the digitisation of welfare services is often accompanied by reductions in the overall welfare budget, the elimination of other services, stronger sanctions regimes and more demanding conditions for obtaining social protection (Alston, 2019). Further, the report highlights incidents where decisions to ‘go digital’ have been made without sufficient policy discussions, as digitisation is seen as an administrative matter rather than a political decision (Alston, 2019).

In this way, digital transformations of the welfare state are implemented in the name of efficiency, suggesting that automated data-based decisions are better at ensuring that the help goes to those who need it the most and that such digital systems will minimise the risk of benefit fraud (Alston, 2019). However, the introduction of digital technology in public services seems to go hand in hand with neoliberal initiatives of privatisation, outsourcing and austerity measures in the welfare state.

Consider this a teaser for my dissertation (hopefully!).


Alston, P. 2019. Report of the Special Rapporteur on extreme poverty and human rights. United Nations General Assembly, A/74/493.

Eubanks, V., 2018. Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.


Is fast internet a social right?

It’s election time here in the UK and a couple of weeks ago, Labour put internet access on the political agenda. The party launched a plan to provide free full-fibre broadband to “every home and business in the UK” by nationalising parts of BT and taxing big tech companies such as Facebook and Google.

The idea has caused some controversy with Boris Johnson calling it a “crazed communist scheme”, and industry spokespeople from BT questioning the party’s estimated cost of rolling out fibre broadband to every household by 2030.

As I am not exactly an expert on the landscape of broadband providers in the UK, I will not go into details of how this might work, the possible drawbacks and whether the estimated costs are realistic. What I find interesting, though, is how Labour have presented this issue as a question of equality. Jeremy Corbyn himself said that universal rapid broadband: “must be a public service, bringing communities together with equal access in an inclusive and connected society”.

The video above again underlines issues of inequality of access in Britain today, explaining how the roll out would begin in rural and remote communities where broadband access is weak.

In my opinion, this rhetoric of access and inequality is interesting to look at in light of sociological research on the digital divide. Where debates on the digital divide in the late 90s framed it as simply a technological question of formally connecting the disconnected ‘have-nots’, sociological scholars in the early 2000s highlighted the social and political aspects of the digital divide (Van Dijk, 2005; Selwyn, 2004). The arguments were mainly that the digital divide followed other patterns of socio-economic inequality – such as age, gender, ethnicity and geography – and that bridging the divide would be more complex than just granting wider access to ICTs (information and communications technologies).

An example is Neil Selwyn’s (2004) more nuanced understanding where he distinguishes between four stages of the digital divide:

  1. Formal access to ICTs: where individuals in theory have access to digital technologies at home or in community settings.
  2. Effective access and use of ICTs: where individuals feel that ICTs are accessible to them at home or in community settings, but they may not use these in a meaningful way.
  3. Meaningful engagement with ICTs and content: where individuals experience control and choice over technology and content and use these in relevant and useful ways.
  4. Participation in society: meaningful engagement with ICTs will have consequences for individuals’ ability to engage in society, e.g. in terms of social activity, political activity and consumption activity.

This view underlines that not all digital access is equal and that being formally connected doesn’t necessarily bring about meaningful engagement. This framework is interesting in the debate about full-fibre broadband, where the divide we see today is perhaps between people with fast, effective internet and communities with slow broadband who still feel disconnected. Although some critics have questioned the political importance of prioritising full-fibre broadband for everyone, Selwyn’s conceptualisation of the digital divide highlights the social implications of being disconnected. Further, Labour’s vision has framed this disconnection as a political issue, highlighting the disadvantages that come with weak internet access – especially at a time when the UK is digitising more and more welfare services, such as the implementation of universal credit, a ‘digital by default’ benefit scheme. Following Selwyn, weak internet access in the UK today may then be affecting people’s ability to participate in society and access public services.

I’ll probably write more on this as I find the political aspects interesting as well.




Selwyn, N., 2004. Reconsidering political and popular understandings of the digital divide. New media & society, 6(3), pp. 341-362.

Van Dijk, J.A., 2005. The deepening divide: Inequality in the information society. Sage Publications.


Dilemmas in digital childhood and parenting

One of the fun things about being back at university is gaining new perspectives on issues I encountered while working. After finishing my bachelor’s degree, I worked for two years at a small Danish charity, dedicated to making research on children and families accessible for professionals “on the ground” through research-based newsletters, seminars and conferences.

One of the topics I had the chance to write and talk to professionals about was digital childhood and parenting – specifically the impact of digital media on parent-child relationships. As the content was mainly aimed towards health professionals, psychologists and teachers, most of the research I looked at focused on child development and the potential consequences of parents being distracted by mobile devices (see e.g. Kildare & Middlemiss, 2017 and McDaniel & Radesky, 2018). In the literature, McDaniel and Coyne (2016) introduce the concept of technoference to understand the “everyday interruptions in interpersonal interactions or time spent together that occur due to digital and mobile technology devices” (McDaniel & Coyne, 2016).

In Scandinavia, nurses and health visitors are especially concerned about technoference around infants and babies, as early parent-child interactions and eye contact with caregivers are vital to ensure secure attachment. The concern has now resulted in new professional guidelines for Norwegian health visitors that include talking to parents about their use of mobile devices around their young children – an example of professionals adapting their practice to developments in digital society (Hansen et al., 2017).

The video below, created by Norwegian public broadcaster NRK, recreates Dr. Edward Tronick’s still-face experiment. The original experiment observes a mother and a baby interacting for one minute, followed by one minute of unresponsiveness from the mother, as the baby tries to win back her attention. In NRK’s video, the mother spends a minute looking at her phone rather than her child, and the distressed reaction of the baby is similar to observations from the still-face experiment (the video is in Norwegian, but the effect is still pretty clear for non-Norwegian speakers).

What happens to your baby when you give your attention to your phone? We did an experiment with three little ones, three mothers and three phones. Watch Teknologien som forandrer oss (‘The Technology That Changes Us’):

Posted by NRK Viten on Wednesday, 12 October 2016


This is just one example of how digital technologies are now making their way into family life, and it is naturally important to understand the potential consequences for children’s development when growing up around digital devices. But as a (future) sociologist, I am also constantly asked to consider the broader social and cultural context that drives such changes in family life – and in that respect we can note that technoference doesn’t occur in a vacuum.

From a sociological perspective it is then important to consider, for example, the conditions that cause parents to get distracted in this way. Kildare and Middlemiss’s review from 2017 indicates that parents have complex and ambiguous feelings about their use of digital technology around their children. Research shows that parents often feel guilty about using mobile devices when spending time with their family, but at the same time struggle with expectations to be available 24/7 for work, as they feel obliged to pick up the phone and reply to e-mails in their so-called free time. Some parents also report that mobile devices make them better parents and help them navigate and coordinate a busy family and work life (Kildare & Middlemiss, 2017).

Parents’ use of digital technologies therefore seems embedded in current norms and expectations of availability at the workplace and the increasingly blurred boundaries between work and private life.

Other interesting aspects to look at could be the notion of sharenting, where contemporary parents – especially mothers – share content online related to their children (see e.g. Lazard et al., 2019), as well as the growing number of apps aimed at recording and monitoring child development, sometimes even before the baby is born (Leaver, 2015). Such trends in digital parenting might even make parents feel guilty if they are not documenting and sharing their kids’ childhoods, which might work against attempts to limit parents’ use of technologies around their children.

I suppose what I’m trying to get at is that we – professionals, parents, families, everyone – are still trying to figure out these digital dilemmas. How do you parent in a digital world? What are the norms around screen use? What are the expectations of ‘being present’, both online and offline? And what are the implications for children and parents when digital devices are a part of everyday life?

All of these questions are being debated and researched – and in my experience, we need as many perspectives on this as possible. That is, we need both the medical and psychological research on the impacts of technoference on children, but it is just as important to understand the social processes that shape and condition the choices of parents today.


Hansen, A. H., Strømsvåg, K. H., Valla, L. & Misvær, N. 2017. Mobilbruk kan skade samspillet med barn. Sykepleien 105(9), 50-53.

Kildare, C. A., & Middlemiss, W. (2017). Impact of parents’ mobile device use on parent-child interaction: A literature review. Computers in Human Behavior, 75, 579-593.

Lazard, L., Capdevila, R., Dann, C., Locke, A. and Roper, S., 2019. Sharenting: Pride, affect and the day‐to‐day politics of digital mothering. Social and Personality Psychology Compass, 13(4), p.e12443.

Leaver, T., 2015. Researching the ends of identity: Birth and death on social media. Social Media + Society, 1(1), pp. 1-2.

McDaniel, B. T. and Coyne, S. M., 2016. “Technoference”: The interference of technology in couple relationships and implications for women’s personal and relational well-being. Psychology of Popular Media Culture, 5(1), p.85.

McDaniel, B. T. and Radesky, J. S., 2018. Technoference: Parent distraction with technology and associations with child behavior problems. Child development, 89(1), pp.100-109.

Let’s talk about techno-chauvinism and machine bias…

To get this blog off the ground, I just wanted to share a podcast that I really enjoyed – as a sort of ‘lightweight’ introduction to some of the themes I’m going to explore on here.

I don’t know how many of you watch Queer Eye? (Yes, the amazingly wholesome show about five queer men making the world a better place, one episode at a time).

Anyway, Jonathan Van Ness – pictured above and one of the Fab 5 – has a podcast called Getting Curious. Here, JVN chats to experts, researchers and activists on a range of topics (you can find anything from vaccines and Planned Parenthood to gun violence and foster cats). Earlier this year, I listened to the episode What is Techno-Chauvinism and Are We Relying Too Much on Unconscious Biased Systems with Meredith Broussard, NYU Associate Professor, data journalist, AI researcher and author.

In this episode, JVN and Broussard discuss various concerns related to digital technology – in an easy and conversational manner. This podcast really opened my eyes to issues around the inherent bias in algorithms, code and artificial intelligence, and how the implementation of digital technology may reproduce existing societal divisions and inequalities.

Broussard uses the concept techno-chauvinism to describe our collective fascination with and belief in technology as neutral, objective and basically the solution to everything (Broussard, 2018). As someone who pretty much gave up on maths in middle school (Nah, that’s a lie, I actually did make it through a few statistics courses as an undergraduate student in Copenhagen, but you get the gist) and who doesn’t know the first thing about coding, I was probably guilty of believing this techno-utopian narrative. But, as I have delved into critical literature on digital technology this semester, I have become more aware of the social shaping of technology (e.g. as explored in MacKenzie & Wajcman, 1999) and – just as important – the fact that data is never neutral or ‘raw’ (Gitelman, 2013), but rather embedded in a political, social and cultural context. As one of my professors put it: “We always seem to bring our human baggage into technological innovations”.

So, wait, what do I actually mean by that? What are the implications of this?

Well, one example is the use of COMPAS in the US criminal justice system – a software programme designed to predict the likelihood that an individual who’s been arrested will commit another crime. Such risk assessment scores are apparently not uncommon in American courtrooms and may influence judges’ decisions. But when a group of journalists from ProPublica investigated risk scores for over 7000 individuals and whether these people had actually been charged with new crimes, they found significant disparities when it came to race. That is, the system was more likely to falsely flag black defendants as future criminals than white defendants – and vice versa, white defendants were more likely to be labelled as low-risk when they actually did go on to commit new crimes (Angwin et al, 2016).

Simply put, humans who are biased make technology that is biased.
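To make the kind of disparity ProPublica measured a bit more concrete, here is a small sketch in Python. The records below are entirely made up for illustration – they are not ProPublica’s data – but they show how you would compute the two error rates at the heart of the story: the false positive rate (flagged high-risk but did not reoffend) and the false negative rate (labelled low-risk but did reoffend), per group.

```python
# Synthetic, invented records purely for illustration:
# (group, flagged_high_risk, reoffended)
records = [
    ("A", True, False), ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", False, True), ("B", False, True), ("B", True, True), ("B", False, False),
]

def error_rates(records, group):
    """Return (false positive rate, false negative rate) for one group."""
    rows = [r for r in records if r[0] == group]
    fp = sum(1 for _, flagged, reoff in rows if flagged and not reoff)
    fn = sum(1 for _, flagged, reoff in rows if not flagged and reoff)
    neg = sum(1 for _, _, reoff in rows if not reoff)  # did not reoffend
    pos = sum(1 for _, _, reoff in rows if reoff)      # did reoffend
    return fp / neg, fn / pos

for g in ("A", "B"):
    fpr, fnr = error_rates(records, g)
    print(f"group {g}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
# group A: false positive rate 67%, false negative rate 0%
# group B: false positive rate 0%, false negative rate 67%
```

In this toy example the tool is ‘wrong’ in opposite directions for the two groups – group A is over-flagged, group B is under-flagged – which is exactly the shape of the racial disparity the journalists found, even though the tool’s overall accuracy can look identical across groups.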

Before listening to this podcast, I was not aware of the way algorithms are now being implemented in various social and public services, especially in the US, affecting who gets benefits, housing, insurance etc. If you’re interested in more examples of this, Virginia Eubanks explores the impacts of data mining, algorithms and predictive analytics in her brilliant book ‘Automating Inequality – How High-Tech Tools Profile, Police and Punish the Poor’. Eubanks is a political scientist and her writing on technology and social justice has really inspired me this semester – so much so that I am considering writing my MSc dissertation on some of these issues (so if you stick around, you’ll probably hear me go on and on about this in the coming months…).

Anyway, give this podcast a go! I am always excited about ways to make academic knowledge more accessible – another reason why I’ll try to commit to this blog! – and JVN’s podcasts are definitely that.



And because I am a responsible academic, even while blogging, here are my sources:

Angwin, J., Larson, J., Mattu, S. & Kirchner, L. 2016. Machine Bias. ProPublica, May 23, 2016.

Broussard, M., 2018. Artificial unintelligence: How computers misunderstand the world. MIT Press.

Eubanks, V., 2018. Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.

Gitelman, L. 2013. Introduction. In Raw Data is an Oxymoron. Cambridge, MA: MIT Press.

MacKenzie, D., & Wajcman, J. 1999. Introduction. In The Social Shaping of Technology. 2nd ed., Buckingham, UK: Open University Press.

Hiya internet

So, this is my first attempt at blogging… (can you believe it, I’m 26 years old, grew up during the rise of the internet and I generally like to express myself in writing). Apparently I needed a little push from two professors on my master’s programme in Sociology to dive into this.

The intention is to use this blog for more-or-less coherent thoughts and reflections during my studies. Areas of interest are (for the time being…) opportunities and issues in digital society. My bachelor’s dissertation (from the University of Copenhagen, yep, I’m Danish) revolved around digital mobilisation on Twitter and the way Twitter can enable collective storytelling of resistance and injustice – here via the hashtag #BlackLivesMatter. This was spring 2017 – seems like ages ago, doesn’t it? – right around Donald Trump’s inauguration. My thesis back then was hopeful, highlighting the opportunities to construct mobilising narratives on social media and the way online engagement was often intertwined with ‘offline’ protests in the streets.

Looking back, perhaps I was a little bit naïve at the time. In my fascination with words, discourse and storytelling, I overlooked the role of powerful social media platforms in today’s digital world as well as pressing concerns about data and privacy. Now, two years later, I am back in a university setting – now at University of Edinburgh. Here, I am determined to continue to explore sociological questions of power, inequality and resistance in a world of digital platforms, big data and algorithms.

This blog will delve into some of these themes as I share insights from my readings, but also recommend great podcasts, documentaries, news articles and other relevant media.

Stay tuned (if you want to).