For those not overly familiar with W3C processes: a workshop is usually an early event in a W3C activity. If a topic seems to gather a reasonable amount of interest, a workshop is held and interested parties (whether W3C members or not) present their positions. Afterwards there is usually some discussion on whether this is something W3C should get involved in (i.e. whether standardization makes sense). If that is the case, then either the charter of an existing group gets extended or a new group (interest group, business group, working group) gets established.
So a workshop is not directly a standardization activity, and not necessarily aimed at a common result or agreement; it is more a statement of the status quo and potential issues.
Workshop minutes are usually taken ‘live’ via IRC and are publicly available, so I won’t go into presentation details (the presentations will be linked from the workshop agenda http://www.w3.org/2014/privacyws/agenda.html), but will just summarize and point out possible relevance for UCN.
The first day’s presentations were primarily about various privacy displays in browsers and operating systems. Mozilla presented the ‘Lightbeam’ plug-in (which displays tracking cookies in Firefox in a nice graphical representation), and Opera showed the privacy dashboard of their mobile browser. There was also a presentation of an early (textual) privacy dashboard that Dave Raggett did internally for W3C, an ‘app privacy meter’ that derives a general ‘app privacy invasion’ index from the Android permissions an app requests, and a presentation of Firefox OS privacy features (which also include some features to avoid giving out detailed information in the first place, for example by allowing the user to specify the granularity of the location information provided).
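The ‘app privacy meter’ idea can be sketched roughly as follows. The permission names are real Android identifiers, but the weights, the scoring scheme, and the function name are illustrative assumptions on my part, not the actual metric presented at the workshop.

```python
# Hypothetical sensitivity weights per Android permission (assumption,
# not the workshop's actual scheme).
PERMISSION_WEIGHTS = {
    "android.permission.READ_CONTACTS": 3,
    "android.permission.ACCESS_FINE_LOCATION": 3,
    "android.permission.RECORD_AUDIO": 2,
    "android.permission.CAMERA": 2,
    "android.permission.INTERNET": 1,
}

def privacy_invasion_index(requested_permissions):
    """Collapse a permission list into a single 0-10 'invasion' score."""
    score = sum(PERMISSION_WEIGHTS.get(p, 0) for p in requested_permissions)
    max_score = sum(PERMISSION_WEIGHTS.values())
    return round(10 * score / max_score, 1)

print(privacy_invasion_index([
    "android.permission.READ_CONTACTS",
    "android.permission.INTERNET",
]))  # → 3.6
```

The interesting design question, touched on below, is how such a single number should be calibrated so that it stays meaningful across very different apps.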
All speakers essentially agreed that such a display is a useful indicator for the user and something that could and should be standardized, as it is confusing for users to face a different privacy rating and display system on every device/browser/OS. Since privacy display is not really a competitive factor, it was also seen as an area where standardization is realistic, as no browser vendor would gain a competitive advantage from it.
One statement that was made a couple of times was that showing privacy information generally has no effect on the user. In almost every case, a warning is shown at the moment a user wants to achieve a goal (wants to install the software to do something, or to access a specific web page), and it has been shown that users prefer to reach the short-term goal; long-term concerns (such as intrusions into privacy) are ignored, if the privacy indicators are read or noticed at all.
One presentation was slightly different – it was mostly about the possible risks of eye-tracking on tablets/phones. Eye-trackers used to be primarily lab equipment for UI tests and marketing campaigns, but the technology (basically an IR pattern projector and an IR-sensitive camera) could reasonably be shrunk to fit into a handheld device. This would offer some interesting new UI capabilities (like hands-free scrolling or feature magnification), but would also introduce new privacy concerns.
Three points that were made in one form or the other by multiple speakers:
It is difficult to make users aware of the implications of individual privacy parameters – there always seems to be a need to summarize them into a single (hopefully meaningful) value. Preferably, this value takes contradictory information into account and aims for a ‘reasonable’ number of alerts, to prevent the user from automatically clicking away every warning. An example given was that certificates sometimes expire simply because someone forgot to renew them. If the browser determines that a user has visited a page a couple of times in the past and that the associated information (IP, route, …) has not changed, then it might conclude that no malicious use is intended and not show the user a ‘certificate is invalid’ pop-up when accessing that web site.
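The warning-suppression heuristic described above could look roughly like this. The thresholds, the `History` record, and the function name are illustrative assumptions; the workshop discussion did not specify concrete values.

```python
from dataclasses import dataclass

@dataclass
class History:
    """Hypothetical per-site record kept by the browser (assumption)."""
    visit_count: int     # prior successful visits to this site
    last_known_ip: str   # IP address seen on those visits

def should_block_with_popup(cert_expired_days, history, current_ip,
                            min_visits=3, grace_days=14):
    """Decide whether an invalid-certificate warning needs a blocking pop-up."""
    if cert_expired_days > grace_days:
        return True   # long-expired certificate: always warn loudly
    if history.visit_count >= min_visits and history.last_known_ip == current_ip:
        return False  # familiar site, unchanged network path: likely a
                      # forgotten renewal, so downgrade to a passive indicator
    return True

h = History(visit_count=5, last_known_ip="192.0.2.10")
print(should_block_with_popup(3, h, "192.0.2.10"))   # → False
print(should_block_with_popup(3, h, "203.0.113.7"))  # → True
```

The point of the heuristic is alert economy: reserving the disruptive pop-up for genuinely anomalous cases keeps users from reflexively dismissing it.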
Permissions are currently on different levels and somewhat mixed. Example:
- Request access to microphone
- Request access to location
- Request a screenshot
The first case requests permission for a device, the second for a function (which can be fulfilled by different devices in the phone), and the third for a specific data item (which will be selected by the user and can be inspected before being uploaded). Information to the user should usually be given on the functional level (e.g. instead of ‘the app wants to access the microphone’ use ‘the app wants to record your conversation’) and, wherever possible, should state which data will be uploaded.
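The recommended rephrasing from device-level to function-level prompts can be sketched as a simple mapping. The mapping entries and function names are illustrative assumptions, not a proposed standard vocabulary.

```python
# Hypothetical mapping from low-level permission targets to
# function-level explanations (assumption for illustration).
FUNCTIONAL_TEXT = {
    "microphone": "record your conversation",
    "location": "determine where you are",
    "screenshot": "capture and upload an image of your screen",
}

def permission_prompt(app_name, permission):
    """Phrase a permission request in functional terms where possible."""
    functional = FUNCTIONAL_TEXT.get(permission, f"access the {permission}")
    return f"{app_name} wants to {functional}."

print(permission_prompt("ExampleApp", "microphone"))
# → ExampleApp wants to record your conversation.
```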
Mandatory explanations for permission requests by app or web page providers (such as ‘app xyz wants to access your contact list for navigation purposes to facilitate direct guidance to people in that list. It will never be uploaded to our servers.’) were generally seen as useless. Repeated statement: « Legitimate services will state the obvious and malicious services will just lie. »
Another session block in the workshop was primarily concerned with the underlying legal issues of privacy.
This was more about EULA / Terms of Service statements (which nobody reads) and the possible advantages of mandating specific text elements from which they are composed. The basic idea is that ‘unusual’ conditions (for a specific kind of service) can be spotted and flagged more easily. There are some tools that do similar things in other legal areas (for example, a tool that checks freelancer contracts and highlights unfavorable terms), which might be adaptable to EULA/ToS texts as well. There was also a presentation about « Terms of Service; Didn’t Read » (https://tosdr.org/), an initiative that intends to crowd-source the reading of contracts and assign a simple A to E rating. There is currently not much content (only 11 ToSs are actually rated, with a further 55 awaiting classification), but the basic idea might be applicable to privacy settings for applications as well.
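A crowd-sourced A-to-E classification in this spirit could collapse individual clause assessments into a single grade roughly as follows. The scoring scheme and thresholds are purely illustrative assumptions, not ToS;DR’s actual method.

```python
def tos_class(clause_scores):
    """Collapse per-clause votes (+1 good, -1 bad) into an A-E grade.
    The thresholds are an illustrative assumption."""
    total = sum(clause_scores)
    if total >= 3:
        return "A"
    if total >= 1:
        return "B"
    if total >= -1:
        return "C"
    if total >= -3:
        return "D"
    return "E"

print(tos_class([+1, +1, -1]))  # → B
```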
Included in the ‘Social & Legal’ session were also the « GSMA Privacy Guidelines » (available here). These are good recommendations for companies that want to be ‘good net citizens’, but hard to enforce or standardize. There was, however, a good slide about the various worldwide privacy regulations and guidelines.
(Not from the W3C workshop, but same slide here Slide 5 of 6. Clearly something that might come in useful for UCN.)
Small side note: an unofficial remark from a GSMA representative was that the general approach for companies in North America is to just follow the EU guidelines, as these tend to be the strictest ones, and everyone who follows them will be fine under almost all other guidelines in the world. This makes a good point for EU initiatives (like UCN!).
Another important thing from a legal point of view was ‘accountability’. The canonical example here was buying some food in the supermarket.
If you buy some food, you don’t read the terms and conditions and you don’t click to agree to an End-User Food Agreement. You just buy the stuff. And if something goes wrong, you know that you can go back to the supermarket and complain, and that (usually) the problem will be taken care of and handled satisfactorily.
It would be desirable to have a similar legal situation on the web: if something goes wrong, you can complain and (legally backed up, but often not even necessary) the provider will attempt to provide a solution. Providing ‘accountability’ on the web is not in the scope of W3C, but it was generally seen as something that would take the ‘sting’ out of the privacy discussion.
The final session of the workshop, « New Architectures », could as well have been labeled ‘Oddities’. The point here was mainly to ensure anonymity and secrecy on the net for those who crave it, including complete end-to-end encryption of everything everywhere, up to a fully anonymous net infrastructure based on public-key-based packet routing and removing IP altogether. (Essentially taking Tor to the extreme and using it and associated features as the building blocks of the net.)
This is not necessarily relevant to UCN and also slightly off in regard to the other presentations: this final block was mainly about allowing users to communicate secretly and anonymously in pairs or small groups, while the rest was more about cases where the user wants to access a service and gives away some privacy to allow the service to operate. If the user allows an app to pass location and contact details to a service provider, then end-to-end encryption and non-traceable networks will have no effect on that.
So much for the summary of the workshop.
The basic result was mainly that ‘privacy dashboards’ are something that apps and operating systems should have, and that their ‘privacy assessment measurements’ should be somewhat standardized. Even though this will probably be ignored by most users anyway, it is something worth doing.
All workshop participants were strongly urged to participate in the W3C PING (Privacy Interest Group).