SG Podcast ‘Van Droom Naar Daad’ – Tinkering with Life & Responsible Innovation

Enjoying the summer? Our podcast series is a perfect match for a fresh summer breeze at the beach, on the bike or on the train, at home or abroad.

The following podcasts are ready for you: first, an episode with microbiologist Bertus Beaumont, in which we tease out the secrets of biology and explore whether we stand on the eve of the synthetic human. Next, we ask ethicist Ibo van de Poel how to shape the future responsibly. In this quest we stumble upon, among other things, the importance of emotions for decision-making, and we even take the referendum into consideration. You can listen to the podcasts on Spotify, SoundCloud, Stitcher and iTunes.

Happy listening!


SG Podcast: Van Droom Naar Daad

Want to enjoy the summer carefree while still picking up some knowledge? Then listen to our new podcast series “Van Droom Naar Daad”. In it, we talk with Delft scientists about the future. How do they envision the world, and what role do they take in it? Join us on a voyage of discovery with a weekly episode, so you won’t be bored this summer. In the very first episode we grill professor of bioelectronics Wouter Serdijn. Does he foresee the rise of the bionic human? A fusion of man and machine? And how should we view this, positively or negatively? And what exactly is bioelectronics, and what can we already do with it? You can listen to the podcast on SoundCloud, Stitcher and iTunes.

Share Your Ideas

Share your ideas for the most enlightening lectures, debates, discussions and workshops in Delft!


Do you have a critical mind and a keen eye? Are you interested in promoting lively debate and discussion? Then we are looking for you!

Your idea

What would you like to add to our programme? Studium Generale wants to offer you the chance to share your ideas and organise your own event.


Fill out the web form below and we will be in touch!

We Are Public!

Part of our programme is now also on offer through We Are Public. This month, We Are Public is still looking for 300 culture optimists in Delft.

Studium Generale is happy to help with the search for culture optimists! Do you also think culture deserves a bigger audience and more income? Join in and bring We Are Public to Delft! For €15 a month you are already a member, with free daily access to the best concerts, exhibitions, performances and films.

Join now via

Hesiodos – the new creative & literary magazine on campus

Remember when there was a creative magazine in Delft where you could get published and read what other students and staff members were making?

Neither do we! But in order to stimulate such creativity, a team of students and staff are pioneering a new magazine, Hesiodos, and they are calling for your content. Are you a writer, poet, illustrator, cartoonist, or photographer? A budding columnist, perhaps, with a scathing opinion? If you need an outlet for your creative, non-academic productions, look no further. Hesiodos will be published for the first time in May 2018, in print. Contributors can send their questions and submissions to or check out their Facebook page. Submissions are preferably in English, but there is room for Dutch content as well. You can also be published anonymously, although the editors need to be aware of your identity.

Take a closer look here for the submission guidelines: 

The best comics & illustrations

The Creative Skills Workshop series that we’ve organized together with Sports&Culture will soon draw to a close. Dozens of Delft students will look back on some seriously fun sessions with authors, journalists, and comics artists.

The comics & illustrations workshop, led by Stephan Timmers, held a vote to choose the three best contributions and agreed to share them online. Here are the winners:

Rafal Tarczynski – Best drawing (“finally summer” in the Netherlands)

Birute Leipute – Best joke

Laura van Beek – Best content

70 Years of Critical Thinking



Last year (2016) Studium Generale celebrated its 70th birthday, and what better way to honor our origins as TU Delft’s patron of critical thought than by questioning the current moral status of our academic community?

In February and March 2016, 300 students from all faculties participated in the SG Conscience Survey, where we asked them to reflect on their own moral code, the ethical problems and promises of their field of study, and how prepared they feel to deal with difficult moral issues as engineers. Though it was intended more as a poll than as a serious academic survey, students were highly enthusiastic and the results were compelling. So we prepared a presentation of the results, which was exhibited back in September.

You’ll find the results here, illustrated by Total Shot productions, as well as a number of responses from industry leaders in the Netherlands and at the university. Enjoy! And think. And enjoy.

[Survey result panels P2–P7, illustrated by Total Shot Productions]


Surveys are fun! That was lesson number one. We now have more insight into the hearts and minds of students at the TU Delft, what their hopes and fears are. And if we’re fully honest, what we’ve seen is overwhelmingly positive.*

*If we can add one critical note, and we wouldn’t be SG if we didn’t, there was one disappointing result. When asked if they felt properly prepared by the university to deal with difficult moral decisions (panel 4), only 11% said yes. That’s shockingly low. Considering the pivotal role that engineers play in shaping the world, we hope the university will take this result to heart and think of ways to help students more. SG will continue to stimulate critical thought and self-reflection, but the conscience of the university is something we should all actively try to be.


Many of the people and institutions we reached out to for a response on the survey results were gracious enough to provide one. Others did not respond at all. And still others responded but without really digging into the issues at hand. We would have loved to have read the response from the ministries of Defense and Education, but alas. So many students unwilling to be maneuvered into the weapons and fossil fuels industries, while that’s exactly what many of them are being prepared for? Seems like a bit of a problem. But we’ll just have to imagine what they might say. In the comments section below, for instance.


Thanks for reading!


[Draw-your-own response cards: 3mE, Anka Mulder, CiTG, Dream Hall, DSM, ethics (Ibo van de Poel), ethics (Sabine Roeser), Onderwijs & Cultuur, Philips, Shell, YES!Delft, Defensie]

Van Hasselt Lecture 2016: Big Data, Human Rights and the Ethics of Scientific Research

Big Data, Human Rights and the Ethics of Scientific Research

John Tasioulas, ABC Religion and Ethics (Australia)
Updated 1 Dec 2016 (First posted 30 Nov 2016)

John Tasioulas is the Director of the Yeoh Tiong Lay Centre for Politics, Philosophy and Law at King’s College, London. This article is adapted from the 2016 Van Hasselt Lecture, which he recently delivered at the Delft University of Technology.

As we all know, digitization is radically transforming our lives. The internet, mobile devices, massive data collections and the analytics applied to them are propelling a digital revolution. The World Economic Forum spoke recently of a Fourth Industrial Revolution.

There are many inter-related facets to this digital revolution, but at the heart of it lie the increased capabilities to amass and store data and the analytical models applied to them for yielding knowledge.

This is the Big Data phenomenon: a phenomenon that is rapidly advancing, pervading ever more areas of human existence, from life insurance to the sentencing of criminals, and one that seems to be here to stay.

Yet the apparently inexorable rise of Big Data has provoked sharply conflicting responses.

At one end of the spectrum, we find unbridled enthusiasm about the proliferating opportunities to improve our lives; at the other end, there is increasing alarm at the pressures and distortions to which Big Data applications subject our established patterns of life.

For every opportunity that Big Data presents, there seems to be a corresponding anxiety.

So, on the one hand, Big Data has generated hopes about the potential good that it can bring to all facets of our lives. Some of the most beneficial applications of big data are expected in the area of biomedical research and public health. Early detection of disease outbreaks, identifying the genomic underpinning of diseases, or recognizing patterns of unknown and unreported adverse side-effects of blockbuster drugs, are just some of the areas in which big data applications have delivered promising results.

But, on the other hand, the Snowden revelations about government surveillance have underscored growing fears about how certain uses of Big Data can undermine not just privacy, but ultimately trust, democracy and liberty. The stream of reports about hacked databases, data kidnapping and other cyber-crime have stoked fears of a new vulnerability in the digital world.

And so the pressing question arises: can we harness the potential of big data while keeping faith with our ethical values?

The appeal to ethics is often interpreted as a conservative gesture, one hostile to scientific progress. However, it is a bad mistake to view ethics and science as inherently in tension, to think of ethics as just a series of roadblocks on the path to scientific knowledge. Science is itself an inherently ethical enterprise. In order to grasp this, we need a suitably broad interpretation of the “ethical.”

Ethics is about goods that we have reason – and sometimes even an obligation – to pursue, such as the good of knowledge that can be used to bring about significant improvements in health. In this way, health research is an ethical enterprise from the very outset. After all, it would be deeply uncharitable to regard scientists engaged with Big Data as merely pursuing their narrow self-interest, whether defined in terms of monetary enrichment, satisfaction of curiosity, or career advancement. Instead, they are seeking public goods, goods that benefit all, such as scientific knowledge, which is both intrinsically valuable and instrumentally valuable as a means of realizing goods such as health, education, enjoyment, and so on.

But there are, of course, ethical considerations bearing on how we may properly pursue these goods. In particular, these considerations centrally include human rights, but not only human rights.

The rights to privacy and science

Both the right to privacy and the right to science appear in the Universal Declaration of Human Rights of 1948:

No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.

(Universal Declaration of Human Rights, Article 12)

(1) Everyone has the right freely to participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits.

(2) Everyone has the right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he is the author.

(Universal Declaration of Human Rights, Article 27)

Unlike the right to privacy, the “right to science” is an unfamiliar right, but one with massive unexplored potential. My focus here is on the first component of this right: the idea that everyone has a right to benefit from scientific advances, and the idea that people have a right actively to participate in scientific inquiry. Ordinary people have a right, in other words, not just to be passive beneficiaries of advances made by professional scientists, but to engage in scientific research themselves.

Although a right of popular participation is a familiar idea in the realm of democratic politics, the right to science takes participation much further. The result is that participation infuses and potentially radically transforms our modes of scientific practice and our culture more generally. Indeed, I would argue that democratic political participation itself requires public participation in the wider culture in order to be effective.

In recent years, technological developments have enabled increasing numbers of people who are not professional scientists to exercise their right to participation, sometimes as part of the phenomenon dubbed “citizen science.” So, in the area of health research, devices that collect and transmit data about individuals facilitate the exercise of the right. Health research produced by such “citizen scientists,” often in relation to rare diseases that are not profitable enough to attract the attention of pharmaceutical companies, has appeared in leading scientific journals, including Nature.

Both the human right to science and the right to privacy are binding norms of international law, and they are also given force by the laws of regional and domestic jurisdictions. Behind these legal manifestations, however, lie human rights conceived first and foremost as universal moral rights – in other words, moral rights possessed by all human beings simply in virtue of their humanity. It is these background norms of human rights morality that control the proper interpretation of the rights to privacy and science in human rights law.

Understanding human rights

I want to make four points about how we should understand the morality of human rights: the grounding of human rights – that is, the values that justify human rights norms; the content of human rights – that is, the duties associated with them; the bearers of human rights duties; and what I will call the incompleteness of human rights.


Regarding the grounding of human rights, I think we should accept two ideas. First, human rights are not fundamental norms, but owe their existence to the way in which they protect other values. And, second, any given human right does not typically protect one master value – such as human dignity or freedom – but a number of values.

So, for example, the right to privacy protects a variety of interests: our interest in being able to make life-choices without interference or surveillance, our interest in not being humiliated, in forming and maintaining intimate relationships, and so on.

The right to science also serves a multiplicity of interests. These include: the interest in acquiring knowledge of the world, the interest in achievement (where this knowledge is acquired through one’s own successful efforts), the interest in community with others (participation in science typically involves cooperation with other colleagues towards the shared goal of generalizable scientific knowledge).


A human right protects our interests, but it is not the same as those interests. The human right identifies the extent to which the protection of our interests imposes a duty or obligation on others. The duty specifies what the duty-bearer must do or refrain from doing in order to comply with the right. The duty associated with a human right is its practical content.

So, for example, if I am in dire need of a kidney transplant, my interest will be greatly served if you donate to me your spare healthy kidney. But I do not have a right to your kidney, because you do not have a duty to serve my interest in that way. You would not be wronging me in refraining from donating your spare healthy kidney.

The process of specifying the duties associated with human rights is a complex one. Ought implies can, so a person can only be under a duty that it is feasible to impose on them. Minimally, it must be generally possible for duty-bearers to do what they have a duty to do: there can be no duty to do the impossible. So, even if in theory certain security measures – such as fool-proof anonymization of data – would enhance my interest in privacy, there is no right to them if there is no way to implement those measures given the current and foreseeable state of technological capacities.

In addition, even if it is possible to do something, imposing a duty to do it must not be unduly burdensome. So, for example, my interest in privacy does not impose an obligation on the police not to require me to tell them my identity if they find me behaving suspiciously. Recognising such a right would be unduly burdensome in relation to other important values, such as the detection and prevention of crime. Or, back to our original example, my right to life does not include an obligation on your part to donate your spare healthy kidney, as this would be excessively burdensome for you.

Duty bearers

Who bears the duties imposed by human rights? Human rights law, especially international human rights law, treats the state as the exclusive, or at least the primary, duty bearer. However, there is nothing in the underlying idea of a moral human right that restricts duty-bearers to the state and its organs.

Some rights, such as the right to a fair trial, might be primarily targeted at the state, but non-state actors, ranging from individuals to transnational corporations, may also bear human rights duties. This has been recently underlined by the UN Guiding Principles on Business and Human Rights, which were endorsed in 2011, and are political rather than legal principles. The Guiding Principles directly impose an obligation to respect human rights on all corporations, irrespective of whatever the law may say in the countries in which they operate.

This latter issue is of special relevance to Big Data for several reasons. Corporations engage in massive data collection that stretches over many national jurisdictions. They possess powerful computational tools that are opaque to outsiders. Given the accelerating pace of developments in Big Data, national laws designed in the analog era may be inadequate in protecting rights affected by big data. Big Data companies have responsibilities to respect privacy rights even when law does not explicitly demand such protections.

Corporations also have responsibilities to respect the right to science. For example, those with control over large scientific data repositories tend to have exclusive legal rights to their use. Even if they exploit the data for scientific purposes themselves, they may have obligations to make such data sets accessible to other researchers. If obstacles to sharing impede scientific progress, they may also impede our right to share in, and enjoy, its benefits.


Although human rights are weighty moral standards, they are incomplete. They do not exhaust all of the ethical standards relevant to law and public policy. To begin with, there are duties that are not associated with rights, such as duties to oneself (for instance, to develop one’s talents or to maintain one’s health), or duties of charity and solidarity. The breach of these duties is a wrong, but no particular person is wronged, in the sense of having their right violated, when this happens.

In addition to the domain of duties, law and public policy needs to be responsive to a whole range of ethical concerns, including the fulfilment of human needs, economic prosperity, the preservation of nature, the furtherance of the common good, even beyond the point at which any of these concerns generate duties, let alone rights-based obligations owed to some individual.

It is vital to stress this notion of incompleteness, because otherwise we lapse into the error of pressing human rights to do all the ethical work that needs to be done, which risks distorting them while simultaneously marginalizing non-rights-based considerations.

Rights in conflict?

Human rights are not ahistorical and unchanging. Instead, they can evolve over time. This is because what we have a human right to depends on what is practically feasible – what is possible and what is not excessively burdensome. And what is feasible changes over time as our circumstances change, thanks to technological evolution, climate change, new modes of economic production and social organization, and so on. Now, digitization and Big Data constitute major upheavals in the conditions of contemporary life.

This means that new human rights can come into being that did not exist before. One example is a right to internet access, as proposed by Frank La Rue, the UN’s Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. Existing rights also can change in shape – so, for example, the right to health may encompass new forms of treatment as their cost declines over time, such as anti-retrovirals, thereby rendering the duty to supply that treatment not unduly burdensome.

But the changes need not always be in a positive direction. New circumstances can mean we lose human rights we formerly had. In one grim scenario, perhaps ongoing climate change will mean that we can no longer reasonably affirm a right to an adequate standard of living, because this becomes simply impossible to secure, but only a right to a subsistence level of provision (at best).

This is one of the main challenges of human rights thinking. It requires those who are serious about human rights to scrutinize the conditions of life, the ever-changing profile of opportunities and risks. And it goes without saying that part of the difficulty here is that this is a multi-disciplinary endeavour – human rights is not the monopoly of any one discipline, whether philosophy, law or something else.

What does this mean for the complex dynamic between the rights to privacy and to science? They may appear to be in tension with each other. The right to science demands that opportunities be made available to participate and benefit from scientific advances. In some areas, however, such advances can be achieved only through the use of identifiable data that carries a risk to privacy.

Can the right to science be in conflict with the right to privacy? Is there a duty to enable the use of identifiable data and at the same time a duty to protect the privacy of those who could be identified? If these duties are in conflict, what is to be done? Are we to trade off one right, and the interests it represents, against the other? How is that trade-off to be determined?

Now, I think it is a mistake to suppose that one right can be secured only if another is systematically violated in the process. Before embracing such a drastic conclusion – of rights in constant conflict – we need to take a step back.

Specifying the duties imposed by human rights is a holistic process. Duties can’t be specified one-by-one, but only as a totality. It’s a matter of solving a simultaneous equation with multiple variables. Human rights impose duties, and duties are stringent moral reasons that are not regularly or easily overridden by other considerations, including by other duties. To think of human rights as habitually subject to trade-offs, therefore, is to misunderstand their nature. It follows that the duties associated with human rights must in general be jointly satisfiable, with conflicts arising only in exceptional, emergency-like, circumstances.

So, apparent conflicts like the one between the rights to privacy and to science are to be largely pre-empted at the level of adequately specifying the duties associated with each human right. To illustrate this point let me use an example with great contemporary relevance: the application of Big Data analytics to electronic health records.

The dynamic relationship between Big Data and human rights

The maintenance of electronic health records (EHRs) is increasingly becoming standard practice in health care. The primary purpose of EHRs is to store patient information that is used in the clinical care of the patient. But EHR data from a large number of patients can be pooled together, linked to other databases, and queried with important scientific questions. To date, however, EHR data are underutilized in health research and public health practice relative to the benefits they are capable of generating.

One of the main reasons for this underutilization is worries about privacy. The analysis of data may reveal patterns, behaviours, health risks and so on concerning individuals or groups. The disclosure of such information can have adverse consequences for them. For example, it can be used to discriminate against them or stigmatize them. Protecting privacy interests, such as keeping their health information confidential, shields them from these risks. On the other hand, making this information accessible to third parties is necessary for many scientific advances, and it can further people’s interests to share in, and benefit from, such advances.

One way of easing some of the tension here is by asking individuals if they are willing to undertake a privacy risk in order to contribute to the advance of scientific knowledge. This is typically done through informed consent procedures and the authorization of one’s health information for secondary uses. A second way of easing the tension is through anonymization of data, which prevents data being associated with particular individuals.

Unfortunately, both of these approaches are of limited utility. It is often impractical to seek consent every time a research question emerges, while a very broad consent for general data uses is not morally robust enough to cover quite unanticipated uses. Meanwhile, allowing people to opt out by not consenting stands in the way of valuable scientific research, since significant opt-outs introduce a risk of selection bias. Anonymization, on the other hand, may be undesirable depending on the research project, and in any case re-identification may soon become possible as new capabilities or computational methods emerge.
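The re-identification worry is concrete: stripping names does not prevent linkage through quasi-identifiers. A minimal Python sketch, using entirely made-up records, of how an "anonymized" health record can be linked back to a public register on postcode, birth year and sex:

```python
# Hypothetical illustration: both datasets below are invented for this sketch.
# "Anonymized" health records: direct identifiers stripped, quasi-identifiers kept.
health_records = [
    {"postcode": "2628", "birth_year": 1990, "sex": "F", "diagnosis": "asthma"},
    {"postcode": "2611", "birth_year": 1985, "sex": "M", "diagnosis": "diabetes"},
]

# A public register that still carries names (e.g. a voter roll).
register = [
    {"name": "A. Jansen", "postcode": "2628", "birth_year": 1990, "sex": "F"},
    {"name": "B. de Vries", "postcode": "2612", "birth_year": 1985, "sex": "M"},
]

def reidentify(records, register):
    """Link records to names wherever the quasi-identifiers match uniquely."""
    hits = []
    for rec in records:
        matches = [p for p in register
                   if (p["postcode"], p["birth_year"], p["sex"])
                   == (rec["postcode"], rec["birth_year"], rec["sex"])]
        if len(matches) == 1:  # a unique match re-identifies the person
            hits.append((matches[0]["name"], rec["diagnosis"]))
    return hits

print(reidentify(health_records, register))
# → [('A. Jansen', 'asthma')]
```

The first record matches exactly one register entry, so a diagnosis is attached to a name despite the "anonymization"; this is the kind of linkage attack that makes anonymization, on its own, a fragile safeguard.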

A more radical, and promising, way forward, I think, is to ask whether the right to privacy really needs to be specified in a way that poses significant obstacles to the right to science.

Consider, then, the right to privacy in health research. The interests that it serves, for example, the interests in non-discrimination and non-stigmatization, do not necessarily have to be served by conferring on the right-holder exclusive control over the flow of data.

Another way to protect such interests is by shifting the focus onto the conditions under which certain uses of data are to be regarded as permissible. Here are some conditions that could permit the use of data without prior authorization:

  • data-driven research (like all other research) must be socially valuable and its benefits ought to be shared fairly among the community;
  • data users should not subject data to queries that create excessive risks, and if they do, the information gleaned should not be released to any parties that might use it to harm the person in question;
  • data users commit to full transparency about data uses and related actions;
  • one’s interests in not being harmed through discrimination may be better protected by regulation that punishes discrimination, or through other means that serve as a deterrent to discriminatory activity, including compensation mechanisms.
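Read as a decision rule, the conditions above amount to a conjunctive checklist: a proposed use without prior authorization is permissible only if every condition holds. A toy Python sketch (all names hypothetical, not a real policy framework):

```python
from dataclasses import dataclass

@dataclass
class DataUseProposal:
    socially_valuable: bool          # research has social value; benefits shared fairly
    excessive_risk_queries: bool     # queries create excessive risks to individuals
    fully_transparent: bool          # data uses and related actions are disclosed
    discrimination_safeguards: bool  # regulation/compensation deters discriminatory use

def permissible_without_consent(p: DataUseProposal) -> bool:
    """All four conditions must hold for data use without prior authorization."""
    return (p.socially_valuable
            and not p.excessive_risk_queries
            and p.fully_transparent
            and p.discrimination_safeguards)

print(permissible_without_consent(DataUseProposal(True, False, True, True)))  # → True
print(permissible_without_consent(DataUseProposal(True, True, True, True)))   # → False
```

The second call fails because one condition (no excessive-risk queries) is violated; under a conjunctive reading, a single failed condition restores the need for prior authorization.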

This solution considers privacy interests only insofar as they concern certain harms – such as discrimination – resulting from privacy breaches. But we can easily imagine an objection here.

Someone might object: even if there is no harm of stigmatization or discrimination resulting from privacy loss, doesn’t the person have an autonomy interest in others not accessing his personal information without their authorization? And isn’t it plausible that this autonomy interest is protected by the right to privacy, so we have a duty to use the information only with explicit permission?

I believe it’s doubtful to suppose this is always the case. Even if one has an interest in not having their information ever accessed without permission, it may not always generate a duty to respect that interest. The distinction between the right and the interest it protects is all-important. Are we really to say that the autonomy interest is of such surpassing value that there is a duty to respect it no matter how great the potential gains to be derived through scientific research? This seems to me implausible.

Of course, this still leaves us with the moral question of deciding when the potential gains of research are so great that there is no countervailing obligation to give people a veto on the use of their data. It is also a political question, because it cannot be answered entirely in the philosopher’s study, but needs to become the subject-matter of a democratic decision-making process. We are back now to that other aspect of individual participation: in democratic politics.

Though only a sketch of a proposal, this suggests the broad direction in which, I believe, our thinking about human rights and Big Data needs to move.


The emergence of Big Data is a dramatic example of how scientific and technological innovation generates both enormous potential benefits and grave risks. In responding to the challenge of securing the benefits while minimizing the risks, we have to engage in ethical thinking that is just as innovative as thinking in science and technology.

In particular, we should not confront these developments with a dogmatic understanding of human rights that is fixed and unresponsive to changing circumstances. Instead, we have to think creatively about how the content of human rights – such as the rights to privacy and science – might need to be revised to meet the challenges of this Fourth Industrial Revolution.

John Tasioulas is the Director of the Yeoh Tiong Lay Centre for Politics, Philosophy and Law at King’s College, London. His Van Hasselt Lecture is based on an article co-written with Effy Vayena, “The Dynamics of Big Data and Human Rights: The Case of Scientific Research.” He thanks Professor Vayena for permission to use this material.

Professor Tasioulas discussed the relationship between human rights and moral obligations with Scott Stephens earlier this year on The Philosopher’s Zone on RN.