Edited from a panel discussion on Cyborg Liability at RSA Conference 2016, moderated by Rita Heimes, Research Director at IAPP.
Rita Heimes: Let's say I'm counseling a technology company, and I help them write an end user license agreement that basically says, "If you're online for anything... and you don't own a copyright ... and all these other good things... you don't have a choice but to click agree or not use this device. By the way, if there is any dispute, it has to go to arbitration, binding arbitration, and we have control of that process." It's the lawyer's job to help the client avoid liability and get an eager user to sign off on all these things. Do you think that is an appropriate way for consumers to accept risk? Or is the legislative process a better way to judge what people really think about technology and innovation? They look to the government to help them in these circumstances. We just basically figure, ah, it'll work out in the end. I can always hire a tort lawyer. What do you think society is doing ... who's winning in this risk game?
Jessica Long: One of the things that we've been really looking into in this data ethics project is informed consent. I think this is really relevant here, because the things that we're using in our daily lives, and that we honestly rely on, most of us, especially in American cities, don't understand how they work. Really, most people don't understand how those things work. They have no concept of what the risk is. If they do, they have a concept of one tiny piece of it, or of a piece that got blown up in the media and was explained because there were problems.
The idea that it's your own responsibility to know what risks you're taking, and that there could even be consent for a lot of these things, is kind of exploding. I think that to continue along those lines of personal responsibility for the technology we use is actually pretty irresponsible at this point. At the same time, a lot of the companies that are producing these things don't fully understand all the pieces and risks involved either. To me, it seems as though there really needs to be greater interdisciplinary connection, transparency, and education. Companies need to be sharing information about this with each other and with users, which raises a whole other legal consideration: what do we keep, and how much are we really sharing our secrets with each other, in order to get the information that all of the parties need so that they're all being responsible about what they're asking each other to do?
MJ Petroni: I think part of this is also looking at how we manage informed consent in an even more effective and respectful way that meets people where they are. One user might completely understand why sharing data with an app is necessary, how that data might be used, and how safe that is or isn't. Other users would have no idea. Having some way to establish a basic level of digital literacy, but also to see the impact of the data that's shared along the data supply chain, would be helpful. Look at the more proactive measures taken by some of the social networks and application frameworks out there. A lot of times, they prompt developers who are trying to get access to core platform data to provide an explanation of why that data needs to be accessed, and they reaffirm consent consistently.
If you share location data with an app on your iPhone, and then you haven't used that app for a while and it's just accessing your location data in the background, your phone will proactively remind you: "Hey, did you know you set up background location sharing with this application and that it knows your location? It says it's using it for this. Do you want to keep allowing it?" That's a great best practice. Apple isn't legally required to do that, but it strikes a middle ground.
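To make that concrete, here is a minimal Swift sketch of the developer side of this pattern, using Apple's Core Location framework. The class name and structure are illustrative only; the user-facing explanation comes from the app's Info.plist purpose strings, and the background re-prompt itself is handled by iOS, not by app code.

```swift
import CoreLocation

// Illustrative only: a minimal wrapper around the consent flow described above.
// The explanation shown to the user comes from the Info.plist key
// NSLocationAlwaysAndWhenInUseUsageDescription; iOS, not the app, decides
// when to re-prompt the user about ongoing background access.
final class LocationConsentManager: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
    }

    func requestBackgroundAccess() {
        // Triggers the system consent dialog with the developer-supplied purpose string.
        manager.requestAlwaysAuthorization()
    }

    func locationManagerDidChangeAuthorization(_ manager: CLLocationManager) {
        switch manager.authorizationStatus {
        case .authorizedAlways:
            // Requires the "location" background mode to be enabled for the app.
            manager.allowsBackgroundLocationUpdates = true
            manager.startUpdatingLocation()
        case .authorizedWhenInUse, .denied, .restricted, .notDetermined:
            manager.allowsBackgroundLocationUpdates = false
        @unknown default:
            break
        }
    }
}
```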
Jessica Long: I completely agree. The EULA (End User License Agreement) phenomenon, which I've just finished doing extensive research on, is a great example. One of the interesting things I discovered is that there have been several design projects at major universities basically design-hacking the EULA model and circumventing the entire process. One is a design suggestion out of Carnegie Mellon presenting what a visual EULA could look like. There is also a pretty fantastic website where you can enter the URL of any EULA. It loads it into a Flash framework that keeps the regular print of the EULA and, on the side, shows a menu giving you a one-line paraphrase of what each clause means while the actual clause is highlighted in the original text.
You can go all the way down it, and it has color coding for different types of clauses. It's really fun to play with, actually. Those projects show me that the problem is bad enough that people are going all the way around the legal establishment entirely: design students and ethics students at universities are coming up with hacks that are directly accessible to users.
MJ Petroni: I think Creative Commons is another great example, where there's a standardized contract that people who care can opt into and pay attention to, and those who don't care don't have to pay a lot of attention to it, because it's a rubric that people can adopt and build upon.
I sat across from a lawyer one time who was negotiating a contract on behalf of one of my clients. My client's sitting here, I'm sitting here, and I texted the lawyer on the back channel and said, "Would you sign this contract? Does this even make sense to you, given the context of what we're trying to do?" He replied, "Are you actually asking me this? OK, no. It doesn't. You have a point."
There's not a way to do that online when you have a standardized contract. What I love about Creative Commons is that by providing this rubric, it allows people to engage in some self-advocacy without having to reinterpret every contract they want to use. We see this in the privacy frameworks embedded in Facebook's or Apple's user interfaces, where people get common-sense explanations of what their choices are.
We need to create something like that for software and tech services: a standardized part of the contract that handles how data is shared, how many degrees out it's shared, whether it's going to be transformed, and whether it's realistically going to be anonymized, which we think is likely not possible anymore. Allowing people to see that at a glance, and only have to educate themselves once on the rubric, would be great.
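One way to picture such a rubric is as a small, machine-readable label attached to each category of data a service collects. The Swift sketch below is purely hypothetical; the type and field names are illustrative and do not correspond to any existing standard.

```swift
import Foundation

// Hypothetical sketch of a machine-readable data-sharing rubric.
// The categories mirror the ones named above: how far data travels,
// whether it is transformed, and whether anonymization is claimed.
enum SharingDistance: String, Codable {
    case firstParty      // stays with the service the user signed up with
    case partners        // shared one degree out, with named partners
    case openEcosystem   // shared or resold further downstream
}

enum Transformation: String, Codable {
    case raw             // shared as collected
    case aggregated      // combined with other users' data
    case pseudonymized   // identifiers replaced, but potentially re-linkable
}

struct DataSharingLabel: Codable {
    let dataCategory: String        // e.g. "location", "contacts"
    let purpose: String             // plain-language reason for collection
    let sharingDistance: SharingDistance
    let transformation: Transformation
    let claimsAnonymization: Bool   // flagged separately, since true anonymization is doubtful
}

// A user-facing summary could render one line per label, so people only
// need to learn the rubric once and can then read any service at a glance.
func summary(of label: DataSharingLabel) -> String {
    "\(label.dataCategory): \(label.purpose), " +
    "shared \(label.sharingDistance.rawValue), \(label.transformation.rawValue)"
}
```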