5 Questions about Digital Sustainability to… Professor Barbara Prainsack

Professor Barbara Prainsack

Professor Barbara Prainsack sees ‘data solidarity’ as being closely aligned with digital sustainability. She has recently been involved in the development of a public-value assessment tool that provides a new perspective on the ‘data-benefit balance’ discussion. Besides her roles as head of the Research Platform Governance of Digital Practices at the University of Vienna, Austria, and chair of the European Group on Ethics in Science and New Technologies (EGE), she is currently doing a Fellowship at the Berlin Institute for Advanced Study. 

“Data solidarity can create a safe space for taking more risks to create more public value”

What does the term ‘digital sustainability’ mean to you, and why is it so important in society as a whole? 

As I see it, digital sustainability is about finding a successful long-term approach to dealing with all the data that is being generated nowadays for the good of society as a whole. It has become apparent that the issues emerging from datafication and data use in the digital era can’t be addressed effectively with the tools of the paper age. Too many people fall outside of the current frameworks that seek to protect data subjects from the risks associated with digital practices. In addition, digital practices are embedded in stark power asymmetries; the benefits of data sharing and data usage are not being sufficiently shared with the people and communities that the data comes from, so of course it is necessary to find a solution to this. 

However, while I am in favour of giving individuals more control over their data, I do not believe that this will be enough to solve all the challenges of the digital era – especially not the structural ones relating to the power asymmetries between the citizen and corporate or state actors. These asymmetries help powerful commercial players more than anyone, whilst also impeding collective control and oversight of data. Transferring all responsibility to the individual does not consider these existing asymmetries. Additionally, I am very much against the idea of paying individuals for their data. In my view, this will only exacerbate inequalities between rich and poor. For example, it would allow better-off people to pay for services with money, while people on low incomes pay with data and therefore a loss of privacy. Moreover, people in low-income countries are likely to get significantly less remuneration than those in high-income countries if the fee is adjusted to the local living standards and the achievable market price. 

Inequities in digital societies don’t only hurt the people who are affected by them in direct and immediate ways, but also hurt societies as a whole. Right now, commercial profits are being made with data, but communities are not receiving a fair share of the benefits. So although digital sustainability is closely aligned with this, I prefer to use the term ‘solidarity-based data governance’ or ‘data solidarity’, which is built around the core premise that the benefits and the risks of digital practices need to be borne by societies collectively. 


Why do you regard data solidarity as the solution? 

Data solidarity offers an approach to address the issues raised above by increasing collective control, responsibility, oversight and ownership over digital data and resources. The emphasis is on ensuring that both the harms and the benefits of data are distributed equitably within and across societies, because even people who are not heavy users of digital technologies contribute to the benefits that emerge from digital data and practices nowadays. 

The healthcare sector is a good example of this: data about people’s bodies and behaviours is captured by the healthcare system, and monitored and analysed by researchers (increasingly including those at tech companies) with the aim of developing medications and treatments that improve public health. Similarly, everyone bears risks associated with data – not only that their privacy could be infringed on the basis of their own data, but also that they could be discriminated against, profiled or otherwise harmed as a result of other people’s data, e.g. via data analytics and other practices. We see this in fields as diverse as policing, administration and insurance. 

This has recently led the Lancet and Financial Times Commission on Governing Health Futures 2030 to call, in its report, for digital technologies in health and healthcare to be driven by public purpose, not by profit. There are many situations in digital societies in which individual and collective interests are not opposed in principle. But what happens when a person’s individual rights and interests do conflict with the collective public interest? We need to put good mechanisms in place to ensure that individual rights are not overruled by the public interest; we need an approach that accommodates the ways in which justice, equity or privacy are both individual rights and collective goods at the same time.


How can we strike a balance between individual data rights and the collective public interest?

I first got the idea for a data solidarity framework when preparing a workshop presentation in 2012. Since then, the project has gradually gained more support and taken on more structure. Within my role at the University of Vienna, I have been working with an ever-growing group of people, including the newly established Digital Transformation Health Laboratory at the University of Geneva. 

Data solidarity has three pillars. The first pillar is ‘Facilitating good data use’, which means weighing up the individual and collective benefits against the possible risks of harm to people or society. To operationalise this, we are very excited to have recently launched the first proof-of-principle version of our public-value assessment tool, called Pluto. It comprises 24 questions, which we put together based on several rounds of comments from stakeholders and experts, plus a wide-ranging literature review. The questions are weighted to give plus points to benefits and minus points to risks, with more weight given to benefits and risks that affect underserved communities or disadvantaged groups, and also depending on the expected nature and severity of the impact. The final score indicates the potential public value of using the data, taking the collective benefits as well as the individual and group-level risks into account. Harmful data uses that present only risks and very few public benefits in return should be prohibited. In contrast, data uses where risks are low and benefits are high should receive more support.

We all know of cases where data may not be used even though it would create public value. Some regulatory requirements may be too expensive or too onerous to meet, especially for smaller or non-profit organisations. And not all red tape actually protects patients’ interests! It is in those cases, when data use has low risks and promises to bring significant public benefits, that we need regulatory exemptions or other ways of enabling data to be used. 
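Pluto’s actual questions and weights are not reproduced here, but the scoring logic described above – plus points for benefits, minus points for risks, with extra weight for impacts on underserved groups and for severity – can be illustrated with a minimal sketch. All names and weight values below are hypothetical assumptions, not the real tool’s parameters:

```python
from dataclasses import dataclass

@dataclass
class Answer:
    """One answered question in a hypothetical Pluto-style questionnaire."""
    is_benefit: bool           # True = benefit (plus points), False = risk (minus points)
    impact: int                # expected nature/severity of the impact, e.g. 1 (minor) to 3 (major)
    affects_underserved: bool  # impacts on underserved or disadvantaged groups weigh more

# Illustrative multiplier -- the real tool's weighting scheme is not specified here.
UNDERSERVED_MULTIPLIER = 2

def public_value_score(answers: list[Answer]) -> int:
    """Sum weighted plus points for benefits and minus points for risks."""
    score = 0
    for a in answers:
        points = a.impact
        if a.affects_underserved:
            points *= UNDERSERVED_MULTIPLIER
        score += points if a.is_benefit else -points
    return score

# A data use with a major benefit for an underserved group and one minor risk:
answers = [Answer(True, 3, True), Answer(False, 1, False)]
print(public_value_score(answers))  # 3*2 - 1 = 5
```

A high positive score would suggest a data use worth supporting; a strongly negative one, a use that should be restricted or prohibited.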

The second pillar is ‘Preventing and mitigating harm’. If the risks are high but the potential public value is also high, then we need to bring down the risk if the data use is to go ahead. Similarly, putting effective harm mitigation tools in place will ensure that taking risks in pursuit of creating public value does not come at the expense of individuals. In this context, we’ve developed the idea of ‘Harm Mitigation Bodies’ (HMBs), which people could turn to if they felt they were harmed by lawful or unlawful data use. HMBs could provide financial support in some cases, and would also help to obtain a better understanding of the nature, severity and frequency of harms occurring from both lawful and unlawful data use in order to improve governance and address concerns. 

The third and final pillar is ‘Returning profits to the public domain’. In cases where the main benefits of data use are commercial profits, how can those profits be shared more fairly – both within societies and across societies? In a nutshell, data should be seen as belonging to a community in a moral sense. Any for-profit endeavour that doesn’t create a lot of public value should share some commercial profit with the local communities instead.


Can you give some examples?

A positive example in the European health data space is Findata, the Finnish Social and Health Data Permit Authority. The idea behind it is for a public agency to help organisations obtain permits for the secondary use of social and healthcare data, and also to offer analysis and pre-processing tools. I love the idea of creating a public body that provides practical help to organisations such as NGOs, charities and small enterprises that can create public value with their data uses but don’t have deep enough pockets to do it themselves. 

Because of my personal and academic background, most of the work has been done in the healthcare domain so far. But I see plenty of potential for data solidarity thinking to be applied in other domains too, especially from the perspective of Pillar 3. However, the benefit-sharing always needs to fit the specific context and community, which is why it can work particularly well at a local level. In agriculture, for example, research and advancements in seed breeding for the collective good may result in intellectual property (IP) protections that mean that certain local communities are no longer able to grow certain crops. In that case, licensing those local communities to use the IP, or sharing the IP income with them, could be ways of returning profits to the public domain. But because of the specific nature of this benefit-sharing approach, there is not a one-size-fits-all model.

If we think about financial services, some of today’s business models are grounded in data uses with little to no public value and high risks for individuals; they could result in harmful effects, such as certain types of users being profiled and categorised as subprime borrowers. In this case, the recommendation from a data solidarity perspective would be to stop using data in that way, and instead to move towards data uses that have fewer risks for people and ideally also create public value. And if they do not create a lot of public value, then commercial profits should be shared with the public more extensively than is the case today.


What could this mean for businesses?

Data solidarity helps us to move away from the perception that certain types of data are ‘dangerous’, that we should not share data with certain types of entities, and even that sharing data for commercial purposes is a ‘bad’ thing. Currently, different types of data users – think of public-sector versus private-sector organisations – are subject to different regulatory requirements. But the reality is not so clear-cut: public institutions sometimes use data in problematic ways, while commercial organisations sometimes use it in ways that benefit society. Therefore, I see data solidarity as a way for companies to avoid being branded as ‘bad guys’ just because they are commercial entities. Instead, they should be judged according to what they are doing. After all, making a profit is not a bad thing in itself, but you need to make sure that you offer some benefits in return – either by adding public value or by paying back in some other way. 

Even if businesses do not subscribe to the ‘greater equity’ idea that underlies the data solidarity framework, they could still support it for practical reasons. A stronger focus on public value from data will help to increase people’s trust in data use and therefore encourage more data sharing. Moreover, the data solidarity framework enables us to shift the focus onto generating more public value with data by creating a safe space within which we can take certain well-defined risks with our use of data, which can lead to more innovation. For example, a data solidarity approach in the financial services domain would mean that companies would be allowed to share user data more easily and with fewer restrictions, provided that they can prove that it adds public value in some way – such as by improving the provision of services – and does not pose high risks to the people from whom the data is collected. The Pluto tool is still in development, but I hope it can be used not only by regulators, policymakers and institutions, but also by businesses to help them consider the public value of what they’re doing as the basis for a brighter digital future for everyone.

Photograph by: Johanna Schwaiger
