Technology & ethics meeting in Delft

Monday, September 19, I was in Delft for a meeting with researchers from Delft University of Technology (TU Delft) and two Australian researchers. Present were:

Seumas Miller – Australian National University – Centre for Applied Philosophy and Public Ethics; privacy issues
Cathy Flick – Charles Sturt University – ICT industry, codes of ethics, policies, trust, privacy
Jeroen van den Hoven – TU Delft; prof. Ethics of Technology; privacy issues, moral identity
Alper Cugun – MSc student TU Delft; social issues, online communities
Noëmi Manders – PhD researcher TU Delft; ethical aspects of identity management
Bibi van den Berg – PhD EUR; ambient intelligence
Jeroen Timmermans – PhD EUR; playful identities
Michiel de Lange – PhD EUR; playful identities

Some interesting topics were discussed. The full notes on the meeting follow:

// Jeroen vd Hoven:
Privacy issues & moral identity: the right to write your own biography (Isaiah Berlin).
With more and more data, identities are being constructed about people on the basis of database information.

Popper: 3 worlds
1. someone interrupting and interfering in your autonomous thoughts
2. increasing your self-awareness by ‘looking at you’ – new perspective
3. information gets stored in databases > who will see it? Quasi-independent information objects leading a life of their own.

Institutions can constrain information (physical, deontic, epistemic) – moral justification:
– preventing harm (cf. the Nazis in the Netherlands with lists of Jews)
– transparency – personal data has become commodified and is being sold
– information must remain in its original context; it can be used in discriminatory ways if it leaks into other domains.
– One can only write one's own moral biography. Other institutions/persons shouldn't have access to information that could picture your Self. [one cannot know what it is like to be you; persons change and develop their personalities]. Privileged access only for the subject.
We expect modesty in others in judging us and saying “I know you”.
Q: is a breach of privacy justified when you suspect someone is doing something evil and you're aiming for the truth, but it's difficult to get at?
Q: What latitude should people have to keep aspects of their life private?
> people need a 'time-out' from morality to experiment with it and develop their own moral identity.
[individualist perception of identity; atomistic vs. collective image]
> We reveal more of ourselves to invite others to assist us in identity creation.
[presupposing an autonomous kernel that knows itself].
[the individual that constructs itself can do so in language that is not at all useful for others to comprehend – Wittgenstein]
Q: how can you describe the boundaries of what others may know about you (just like 'social space')? A: one can discern information as 'sensitive' when it can be used to describe your moral identity; Q: but what is 'sensitive information'? It differs from person to person. Stereotypes can quickly emerge.

// Bibi
ambient intelligence: personalised space.
Q: to what extent can you expect a technology to ‘know you’ and adapt to your personality? Is the Self becoming a set of preferences?
concept of 'strangeness' of places: in a globalised world a lot of places are familiar: the same everywhere. No room anymore for discomfort ('ongemak') and the unexpected, unmediated experience. Sometimes persons are formed by uncontrollable events.
Q: what if there are many different people with all their own preferences present in one place?
Q: what does collective experience mean when all places are personalised?
Q: when does it turn off? how does it respond to your changing preferences?
Q: Responsibility for experiences is being handed over to technologies. Who is responsible for the quality of these experiences?

// Cathy
Trusted computing: a chip on the motherboard, booting in 'trusted mode'. Trusted modes are becoming ubiquitous: everyone needs to have a 'trusted computer'. Large corporations are developing this. Possible problems:
– TC could be used to enforce DRM
– Technical specs change often
– Microsoft has left the TCG, the open-standards group it originally helped set up
– it can also be used for 'trusted communication' between terrorists; Microsoft has not fully incorporated it in Vista.
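The mechanism behind "booting in trusted mode" is measured boot: before each boot component runs, it is hashed into a platform register on the chip, and each new measurement is chained onto the previous value, so the final register attests to the whole boot sequence. A minimal sketch of that chaining (SHA-256 stands in for the TPM's hash function, and the component names are hypothetical):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style "extend": the new register value is the hash of the old
    # value concatenated with the digest of the new measurement, so the
    # final value depends on every component and on their order.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# Register starts at all zeros at power-on.
pcr = b"\x00" * 32
for component in [b"bootloader", b"kernel", b"os-loader"]:
    pcr = pcr_extend(pcr, component)

print(pcr.hex())  # attests to the entire boot chain
```

Because the extend operation is one-way and order-sensitive, software cannot set the register to a chosen value; it can only append measurements, which is what makes the final value usable as evidence of a "trusted" boot.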

// Seumas Miller
Collective actions.
individual action – collective action (behaviour/cognitive). How can this be morally significant?
– assertions you don’t have the right to have
joint procedural mechanism: not one person, not a collective, but adding/combining different databases to create profiles > who is responsible?
Q: should we make a distinction between responsibility and accountability?
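The "joint procedural mechanism" point can be made concrete with a minimal sketch (all database names, keys and records below are hypothetical): two separately held, individually innocuous databases are joined on a shared key, and a profile emerges that no single party ever stored.

```python
# Hypothetical example: each holder stores only innocuous data,
# but combining the records yields a revealing profile.
medical_db = {"p-001": {"clinic_visits": 12}}
retail_db = {"p-001": {"purchases": ["vitamins", "sleep aids"]}}

def build_profile(key, *databases):
    """Merge the records for `key` from several databases into one profile."""
    profile = {}
    for db in databases:
        profile.update(db.get(key, {}))
    return profile

profile = build_profile("p-001", medical_db, retail_db)
print(profile)
# Each database owner acted alone; the profile is a joint product,
# which is exactly why responsibility for it is hard to assign.
```

No single step in the procedure creates the profile; it arises from the combination, which is what makes responsibility for the outcome diffuse.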

6 Replies to “Technology & ethics meeting in Delft”

  1. So the Technorati Watch I set out on my own name paid off. I discovered your blog before I could find the time to send you an e-mail.
    (BTW, I’m an MSc student.)

    Nice to see this, I was going to ask you for these notes but now I can continue to be informed.

    I told Bibi that she should see the EPIC presentations in the light of privacy and recommendation engines but I’m afraid I sent her off in the wrong direction. The correct URL is:

  2. Pingback:
  3. Alper’s post got truncated, for some reason. Here’s the full link:

    “In our gathering about identities, ethics and philosophy some interesting points were discussed.
    I will talk some more about certain specifics from Michiel’s notes.

    For those interested in identities I would like to point out my blogpost discussing the presentation Dick Hardt gave at OSCON about Identity 2.0. This is at a very practical level and talks about identity, privacy and verifiability. The presentation is well worth watching, it is fun and it has a visionary air about it.”

4. I think my second post got truncated because I posted it as a trackback from my own blog. Anyway, I hope the stuff I'm pointing to is of interest to you.

  5. Yes, thanks Alper!

It is a cool speech to watch (if only for its speed and action!). Dick Hardt points to some interesting issues about identity, namely that identity online is based on reputation: what sites you visit, where you hang out, what others think and say about you, which is more trustworthy than what you say about yourself. Identity in the online world doesn't mimic the real world in that there is no exchange of reputation across different domains. In the online world "identity 1.0" is based on a direct person-to-site connection. The main problem with online identity is: how can you establish trust in an online environment?

Of course, identity as discussed by Dick Hardt is a very narrow conception of what it means to be/have a Self. It's not identity in the philosophical or anthropological sense. Online identity according to Hardt is the aggregate of trust-relations you build up at multiple sites. This view doesn't take into account elements of identity like change and contingency. What exactly then characterises an online identity in the broad sense? This will be the main focus of the research :).
BTW: did you notice he credits Lawrence Lessig's influence on his presentation style at the end? I saw Lessig present when Creative Commons Netherlands started in 2004. Back then I worked for Kennisland, one of the initiators of CC-NL. I must say, Lessig's presentation was quite a rush…

  6. Sorry for the late reply,

    Yeah I did notice. This is a style of presenting which is becoming known as The Lessig Method. It must have been great to see Lessig giving that presentation in real life.
