If you’re interested, you can watch the whole thing here:
The main point I tried to make was that cultivating the growth of blockchains and cryptonetworks is in fact a critical strategy here. Frequent readers will know that I do not shut up about this, and I made this point again on the panel. It is painfully absent in many discussions about market power, competition, and antitrust in the technology industry, so I will always attempt to insert it into the conversation.
To me, blockchains & crypto are the best “offense” when it comes to competition in the technology industry. Historically, breakthroughs in technology competition have included an offense component along with a defense component (note that the below just focuses on computing, not on telecom):
The “defense” side has generally included a breakup (US v. AT&T) or some sort of forced openness. Examples of forced openness include the Hush-a-Phone and Carterfone decisions that compelled openness upon AT&T. Several decades later came the (ongoing) battles over Net Neutrality with the ISPs. The discussion about data portability and interoperability brings the very same questions to the software / data layer.
Data portability & interoperability are important for two reasons: 1/ they focus on a significant source of market power in the technology industry, which is control of data (“break up the data, not the companies”), and 2/ they represent a class of regulatory interventions that are as easy for small organizations to implement as for big ones, unlike heavyweight approaches such as GDPR that are simple for big organizations to implement but hard on startups.
Having said that, when you dig into the problem of data portability, there are a number of difficult problems to solve. I don’t think they are insurmountable, but I also don’t think they have been solved yet.
For instance, data portability is the notion that a user of a tech service (e.g., Google, Facebook, Twitter, etc.) should be able to easily take their data with them and transfer it to a competing service, if they so choose. This is similar to the way you can port your phone number from one carrier to another, or how in the UK you can port your banking information from one institution to another. Both of these cases required legislative intervention, with an eye towards increasing competition.
Where it gets more complicated is when you start considering what data should be portable, and whose data.
By way of example, within tech platforms there are usually three types of data: 1/ user-submitted data (e.g., photos and messages that you post), 2/ observed data (e.g., search history or location history), and 3/ inferred data (inferences that the platform makes about you based on #1 and #2; e.g., Nick enjoys ice skating). Broadly, I believe that almost all type #1 and type #2 data should be portable, but almost all type #3 probably shouldn’t be.
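As a minimal sketch of this distinction, here is how a portability export might filter by data category. The record structure and field names are my own invention for illustration, not any platform’s actual API:

```python
# Hypothetical sketch: filtering an export down to portable data types.
# The categories and field names here are illustrative, not a real platform API.
from dataclasses import dataclass

@dataclass
class Record:
    kind: str     # "submitted", "observed", or "inferred"
    payload: str

# Type #1 and #2 travel with the user; type #3 inferences stay behind.
PORTABLE_KINDS = {"submitted", "observed"}

def portable_export(records):
    """Return only the records a user should be able to take with them."""
    return [r for r in records if r.kind in PORTABLE_KINDS]

data = [
    Record("submitted", "photo_123.jpg"),           # type #1: user-submitted
    Record("observed", "search: ice skates"),       # type #2: observed behavior
    Record("inferred", "Nick enjoys ice skating"),  # type #3: platform inference
]

export = portable_export(data)
print([r.payload for r in export])  # the inference is excluded
```

The point of the sketch is that the filter runs on the *category* of the data, not its content, which is why drawing the category boundaries well matters so much.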
Adding to the complication is the question of whether “your” data also includes data from other people: messages somebody else sent me, photos in which I was tagged, contact lists, etc. This was at the center of the Cambridge Analytica scandal, where individual users exporting their own data to a third-party app actually exposed the data of many more people, unwittingly.
I’d like to focus here on this last category of complications: how to handle data from other people, and privacy more generally, when considering portability. This is a real problem that deserves a real solution.
First, resetting platform expectations: When you send me an email, you’re trusting me (the recipient) to safeguard that email, and not to publish it, or upload it to some other app that does sketchy things with it. You do not really care (or even know) whether I read my email in Gmail or in Apple Mail, and you do not generally think about those companies’ effect on your privacy expectations. Whereas, when you publish to a social platform, you’re trusting both the end recipient of your content and the platform itself. For instance, if you send me messages on Snapchat, you expect that they will be private to me and will disappear after a certain period of time. If I “ported” those messages to another app where, say, they were public and permanent, it would feel like a breach, both by me the recipient and by Snap the platform. Interoperability / portability would alter that expectation, because the social platform would no longer have complete control (more like email). User expectations would have to be reset, and new standards established.
Second, porting the “privacy context”: Given the platform expectations described above, users have a sense of what privacy context they’re publishing into. A tweet, a message to a private group, a direct message, a snap: each has a different privacy context, managed by the platform. Could this context be “ported” too? I could envision a “privacy manifest” that ships alongside any ported data, like this:
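As a hypothetical illustration of what such a privacy manifest might contain (every field name below is my own invention; no such standard exists today):

```python
# Hypothetical "privacy manifest" that could ship alongside ported data.
# All field names are illustrative assumptions, not an existing standard.
import json

manifest = {
    "origin_platform": "snapchat",
    "privacy_context": "direct_message",  # e.g. tweet, private_group, direct_message
    # Assumes some federated identity standard for naming users across platforms:
    "visible_to": ["user:nick@example-identity-federation"],
    "expires_after_seconds": 86400,       # disappearing-message semantics travel with the data
    "republishing_allowed": False,
}

# The manifest is plain structured data, so it can be serialized and
# carried alongside the ported content itself.
print(json.dumps(manifest, indent=2))
```

The receiving app would be expected to enforce these rules, which is exactly where the liability question discussed next comes in.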
In this model, we might have a flexible set of privacy rules that could even conceivably include specific users who could and couldn’t see specific data, and for how long. This would likely require the development of some type of federated or shared identity standard for recognizing users across networks & platforms. TrustLayers also works in this way.
Third, liability transfer: Assuming both of the ideas above, we would probably need a liability regime in which the sending/porting company is released from liability and the receiving company/app assumes liability (all, of course, pursuant to an initial authorization from the user). This seems especially important, and is linked to the notion of standards and expectations. If data is passed from Company A to Company B at the direction of User C, Company A is only likely to feel comfortable with the transfer if they know they will not be held responsible for the actions of Company B. And that is only possible if Company B is held accountable for respecting the privacy context as expressed through the privacy manifest. This is somewhat like the idea of “data controller” and “data processor” in GDPR, but recognizing that a “handoff” at the direction of the user breaks the liability linkage.
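The handoff logic above can be sketched as a small state machine. This is purely illustrative (no real legal framework is implied, and all names are made up): the sender is only released once the receiver has acknowledged responsibility for the privacy manifest.

```python
# Sketch of a liability handoff record. Hypothetical names throughout:
# Company A sends, Company B receives, User C authorizes.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Handoff:
    sender: str
    receiver: str
    authorized_by: str
    manifest_acknowledged: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def sender_released(self) -> bool:
        # The sending company is only off the hook once the receiver has
        # accepted responsibility for honoring the privacy manifest.
        return self.manifest_acknowledged

h = Handoff(sender="CompanyA", receiver="CompanyB", authorized_by="UserC")
print(h.sender_released())   # False: Company B has not yet acknowledged
h.manifest_acknowledged = True
print(h.sender_released())   # True: liability has transferred to Company B
```

The design choice to model this as an explicit acknowledgment step mirrors the argument in the paragraph above: Company A’s release and Company B’s assumption of liability are two sides of the same event.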
Those are a few thoughts! Tough stuff, but I think it will ultimately be solvable.