---------- Forwarded message ----------
From: Melvin Carvalho <address@hidden>
Date: 5. März 2010 17:24
Subject: Summary of SWXG Telecon With TimBL 3/3/2010
To: Tim Berners-Lee <address@hidden>
I was asked by Harry/MHausenblas to give a summary of the topics covered during this week's telecon with Tim Berners-Lee. I will attempt to do so below.
The Minutes for the Telecon are here:
Slides and design notes (design notes are a work in progress) are here: http://www.w3.org/2010/Talks/0303-socialcloud-tbl/
It would be impossible for me to come anywhere close to the eloquence contained in the links above; however, I will try to give a brief summary, annotated with some of the more interesting points that came up.
The concept of linked data is referred to throughout, so you may want to familiarize yourself with the "4 Rules of Linked Data" before reading further.
1. Use URIs as names for things
2. Use HTTP URIs so that people can look up those names.
3. When someone looks up a URI, provide useful information, using the standards (RDF, SPARQL)
4. Include links to other URIs, so that they can discover more things.
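The four rules above can be made concrete with a toy sketch. The URIs, properties, and the in-memory "web" below are all invented for illustration; a real linked-data client would make HTTP requests and parse RDF.

```python
# Toy illustration of the "4 Rules of Linked Data": things are named by
# HTTP URIs (rules 1-2), looking one up yields useful information (rule 3),
# and that information links onward to more things (rule 4).
# The dict below stands in for real HTTP responses; all names are made up.

WEB = {
    "http://example.org/people/alice": {
        "name": "Alice",
        "knows": ["http://example.org/people/bob"],  # rule 4: links onward
    },
    "http://example.org/people/bob": {
        "name": "Bob",
        "knows": [],
    },
}

def look_up(uri):
    """Rule 3: dereferencing a URI yields useful information."""
    return WEB.get(uri, {})

def friend_names(uri):
    """Follow 'knows' links to discover more things (rule 4)."""
    return [look_up(f)["name"] for f in look_up(uri).get("knows", [])]

print(friend_names("http://example.org/people/alice"))  # ['Bob']
```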
Tim's Pearls of Wisdom
Starting with the big picture. Social networks are doing well. However, after some time there tends to be a limitation with the walled garden approach. A user in one silo expresses frustration when they try to access friends and photos on another silo but are unable to do so, due to a lack of openness.
An illustration of this point was Twitter, and how you can make connections within Twitter but not outside it. identi.ca fares better on this problem, because you can connect to other people running the same software; for example, someone on tweet.ie can talk to someone on status.net. A better solution is to allow APIs for servers on different sites to exchange tweets with each other.
However, APIs are fundamentally poor, because they hide the underlying data. Every time you add a new piece of data you need a new API, and this does not scale. A better solution would be to expose the underlying data, make it browsable, and allow it to be consumed in new and interesting ways.
One important aspect of this is determining who gets access to what. You need a system of access control to keep data private and safe, as necessary.
An expansion of this would be having your own trusted applications. For example, you have something on your desktop that safely stores your credentials, something like a TweetDeck or a Tabulator. This can then let you log in to your account and access your data, and the other data stored by you. It's easy to overcomplicate an ACL system, so we would want to start off with something simple, modeled on the UNIX system that has been tried and tested for several decades (i.e. read/write/control; in this case "control" means you can change the ACL itself).
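The UNIX-style read/write/control model described above could be sketched as follows. The class and method names are mine, not from the talk or any real system; the one essential rule is that "control" is the permission to change the ACL itself.

```python
# Minimal read/write/control ACL, loosely modeled on UNIX permissions as
# described in the talk. "control" means the right to change the ACL itself.
# All names here are illustrative, not from any real system.

class ACL:
    def __init__(self, owner):
        # The owner starts with all three permissions.
        self.perms = {owner: {"read", "write", "control"}}

    def allows(self, agent, mode):
        return mode in self.perms.get(agent, set())

    def grant(self, actor, agent, mode):
        # Only an agent holding "control" may modify the ACL.
        if not self.allows(actor, "control"):
            raise PermissionError(f"{actor} may not change this ACL")
        self.perms.setdefault(agent, set()).add(mode)

acl = ACL("alice")
acl.grant("alice", "bob", "read")
print(acl.allows("bob", "read"))   # True
print(acl.allows("bob", "write"))  # False
```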
The sanest way to design access control is to use the same infrastructure for this data as you do for linked data itself (i.e. give it a URI and allow linking). After all, access control is simply another type of data, and we already have the technology for working with and editing data: SPARQL (Update), WebDAV, etc.
    user = weblogin();
    name = user.foaf.name
is the kind of simplicity that we would aspire to. For authentication, FOAF+SSL is currently the preferred technique, since it links to your FOAF, and therefore your linked data; but other methods are also possible for authentication, and there are a number of solutions.
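The two-line snippet above is aspirational pseudocode rather than a real API. A hedged Python sketch of what it might look like in practice, where `weblogin`, `User`, and `FoafProfile` are all hypothetical names of my own:

```python
# Hypothetical sketch of the aspirational API above: weblogin() performs
# authentication (FOAF+SSL in the talk's preferred scheme) and returns an
# object whose .foaf attribute exposes the user's linked data.
# Nothing here is a real library; it only shows the intended simplicity.

class FoafProfile:
    def __init__(self, name):
        self.name = name

class User:
    def __init__(self, webid, foaf):
        self.webid = webid
        self.foaf = foaf

def weblogin():
    # A real implementation would verify a client certificate and then
    # dereference the WebID URI to fetch the FOAF profile.
    return User("https://example.org/alice#me", FoafProfile("Alice"))

user = weblogin()
name = user.foaf.name
print(name)  # Alice
```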
Similarly, Groups are possible using the same infrastructure as Linked Data. The key is to give the data global scope.
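In that spirit, a group would simply be another resource with its own URI, whose members are other URIs; an access check then becomes a link lookup. The URIs and structure below are invented for illustration.

```python
# Groups as linked data: a group is itself a resource named by a URI, and
# its members are other URIs. Checking membership is just following links.
# All URIs are illustrative.

GROUPS = {
    "http://example.org/groups/friends": {
        "member": [
            "http://example.org/people/alice",
            "http://example.org/people/bob",
        ],
    },
}

def is_member(group_uri, agent_uri):
    return agent_uri in GROUPS.get(group_uri, {}).get("member", [])

print(is_member("http://example.org/groups/friends",
                "http://example.org/people/alice"))  # True
```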
As an aside, one very interesting point is what happens in computer science when you make variables global. When you do it in some programming languages, they die. When you do it with hypertext, you get the Web. It turns many design models on their heads, using a top-down approach rather than bottom-up. But it's the only way you can scale something with the Web.
The mathematical challenges of a distributed model are greater than those of building centralized nodes, since you need to send out a message to each of the connections you have, for each operation. However, it is worth doing, as the benefits outweigh the shortcomings.
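The extra cost can be made concrete with a little arithmetic: in the distributed model each operation fans out one message per connection, so traffic grows with the number of connections rather than the number of users. The numbers below are invented purely for illustration.

```python
# Rough cost comparison: a centralized node handles roughly one write per
# post, while a distributed model sends one message per follower connection
# for each post. Figures are invented for illustration only.

followers = {"alice": 3, "bob": 5, "carol": 2}  # connections per user
posts_per_user = 10

centralized_ops = posts_per_user * len(followers)            # one write each
distributed_msgs = posts_per_user * sum(followers.values())  # fan-out per edge

print(centralized_ops)   # 30
print(distributed_msgs)  # 100
```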
The potential is that it creates a whole new app market. The app market for phones is currently quite vibrant. Why shouldn't we have a (data-aware) app market for your computer as well? There is the potential for one app to come along and be a game changer, or for several small apps to make incremental change. One advantage of apps of this kind is that they are almost by definition interoperable, and cross-application synergies may emerge that are beneficial and were previously unanticipated. This should lead to a virtuous cycle of renewed innovation.
One question related to FOAF+SSL WebIDs: how they relate to other technologies such as OpenID, and how they would work in internet cafes. The answer was that when you start from a position of strong authentication (SSL), it's relatively easy to downgrade your system to something like username/password. Internet cafes are a hairy problem in every case, since you don't know what is installed on the machine. OpenID is a useful and usable pattern that has a lot of traction, but it would be ideal if your OpenID were able to connect to your linked data.
The next question was a critique of RDF as being overcomplicated: why are you forcing this complex infrastructure on me when I only want a string? The answer is that we already have several bodies of data out there in the cloud: the FOAF cloud, the linked data cloud, the web services cloud. Being interoperable with each part of the cloud is part of the overhead, but generally, if you can do web services, you can do linked data. One of the keys is to make a great RDF API that is very simple to use, thereby reducing the complexity for the average user.
The final question related to the role of the W3C, a new structure, and participation with other groups. Tim said that the W3C is currently searching for a new CEO; therefore, it would be inappropriate to make sweeping changes before completing that process. However, it would be highly appropriate to think about what changes we'd like to see, so that they are ready and on the table at the start of that tenure. Tim strongly encouraged liaison with other groups: inviting participation on calls, inviting people to TPAC, organizing unconferences, co-sponsoring workshops, and engaging the thought leaders. (Ed: I think Harry & the other chairs have done a great job on this so far; much thanks is due!) The W3C would like to be more of a place that can make world-class specifications, but also one that can iterate quickly and liaise with other groups.
Tim has presented a far-reaching explanation of how social data can be interoperable across the whole Web. The paradigm that scales is the same paradigm as the Web itself, which means using linked data as your standard and letting the rest follow. Engagement with other willing parties should enable the creation of a new Web, a data Web, a read/write Web, which may have even greater potential than his original creation.
For further reading on the work being done in Tim's group at MIT, please browse their recent papers. I found the following a particularly good background to today's presentation: http://dig.csail.mit.edu/2009/Papers/ISWC/rdf-access-control/paper.pdf