Re: [Social-discuss] A User Perspective of GNU Social

From: Nathan
Subject: Re: [Social-discuss] A User Perspective of GNU Social
Date: Mon, 03 May 2010 13:59:35 +0100
User-agent: Thunderbird (Windows/20100228)

Melvin Carvalho wrote:
> Nice post, it covers a lot, I'm not yet sure if it covers everything, but
> very good for some brainstorming.  As an exercise I'll see if linked data
> can cover all these use cases, maybe with examples.

Concurred, a very good read - comments inline, with notes on both the
original and Melvin's linked data use-case exercise.

> 2010/5/2 Hellekin O. Wolf <address@hidden>
>> A User Perspective of GNU Social
>> This text follows a brainstorming session that occurred yesterday at a
>> park nearby in Amsterdam, with elinvi and psy, from
>> It aims at providing a non-technical view of an idealized GNU Social
>> application from the point of view of a sample user, in order to
>> broaden the reflection on GNU Social beyond self-promotion of various
>> projects, and beyond a purely technical scope.  You need to read it
>> once entirely before replying, otherwise you might get lost in
>> details, where the idea is to provide a draft of a big picture.
>> 1.0 - Universal Access
>> 1.1 - Device Independent
>> My account should be accessible via any device I happen to use: my
>> personal computer, a public (insecure) computer, a telephone, etc.
> Let's say your account is a FOAF.  It's immediately accessible to anything
> that has HTTP (which is a very broad spectrum).  If you don't have HTTP,
> anything that has HTTP can relay it to you (broader still).

With regards to foaf+ssl specifically, the issue of "a public (insecure)
computer" / any form of temporary access still needs some work; however,
multiple options are available to cover this.

>> 1.2 - Platform Independent
>> Whether it is from a GNU/Linux OS, an Android, a Mac OS or a Windows,
>> I should be able to access my account.
> Again all these have HTTP access.  I tend to think of HTTP as a universal
> API.


>> 1.3 - Trust-Based Access Restrictions
>> When I'm using my personal computer, I expect to have the optimal
>> security features: I know that I'm not spying on my own keystrokes,
>> that I have my personal GPG key or SSL certificate locally and I can
>> trust them. Hence, in that case, I get full access to all my
>> functionalities and data.
> FOAF+SSL can do this

There are use-cases where you'd want an additional layer of security,
such as a password only you know, for certain resources.

Further, when dealing with foaf+ssl, any web app can make a client-side
request to https and access-controlled data on your behalf (via js) and
then use said data - here an additional layer of security / notification
/ user acceptance will be needed.

>> But if I'm connecting from a public computer, I cannot give it the
>> same trust: I'm not using my secret keys there, nor do I know if the
>> computer is logging my keystrokes. In that case, I expect the
>> application to ask me for a password, or an out-of-band challenge to
>> grant me potentially harmful functionality (changing password) or data
>> (whole contact list, personal history, etc.).
> I think this can be solved, but we need to go through the details.

Personally, I can't see any way other than using a trusted service to
give you temporary access to your details; just how that would work I'm
unsure of (again, many possibilities, but which is "best"?).

>> In that case, I expect the application to grant me a one-time access
>> to the account, maybe using OTP or similar one-time authentication
>> mechanism, that would at least ensure backward and forward secrecy for
>> that account.
> Right.  I need to read up on OTP I think :)

It'll definitely need to be some form of "one time password" - perhaps
"temporary access" is more accurate though, as it may not be a password:
it could be a temporary certificate, a temporary set of pgp keys, or similar.

On this thought, the oauth (and oauth2) flow seems to make sense, and a
similar pattern could be used in whatever solution; the trusted server
giving access to the details could be [anything, sms server etc].

Whatever the solution, the preference would be for something that isn't
subject to brute force or DoS attacks.
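For reference, a minimal sketch of what such a one-time-password mechanism could look like, following the HOTP construction from RFC 4226 (the secret below is just the RFC's test value; key distribution and the rate limiting needed against brute force are left out entirely):

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a short one-time code from a shared secret and a counter
    (HOTP, RFC 4226).  Each counter value yields a fresh code, so a
    captured code is useless for the next login."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example with the RFC 4226 test secret:
print(hotp(b"12345678901234567890", 0))  # -> 755224
```

A server would track the counter per account and invalidate each code after use; TOTP simply replaces the counter with a time step, which would suit the "temporary access" framing above.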

>> 2.0 - Seamless Contact List
>> 2.1 - Protocol Independent
>> When I want to send a message to my Mom, I don't care if she's using
>> Facebook or XMPP or IRC or email or her phone. Although it might make
>> sense technically to know what service is used, the user just doesn't
>> want to know. The application should hide all that and provide a
>> seamless contact list.
> Agree, but if you don't have HTTP you need to relay it to the other
> protocols.  There is often significant build time associated with bridges /
> interfacing.  One day it will probably all come together though.
> Perhaps we need a protocol 'layer cake' with HTTP at the bottom, and XMPP
> above it etc.

Personally, I believe this is solved already and just needs closer
analysis. If you look at Google, via gmail and related services they have
effectively created an http realtime messaging service that only uses
the http layer, but is fully backwards compatible (and seamlessly
integrated) with email, xmpp, and others - they've also recently
supplied OAuth access to IMAP/SMTP [1] to pretty much complete the loop.

What they haven't yet done (afaict) is account for making the http
layer restful and stateless, or indeed opened up how they are doing it -
I'm quite sure that some careful analysis of Google's approach would
bolster this effort along somewhat though :)


>> 2.2 - Support Free
>> So, if I have "Mom" in my contact list, she would have an email
>> account, an XMPP account and a phone number. Even a snail mail address
>> could work, provided the application is hooked up to a postal mail
>> delivery service.
> That's fine, it's all in your FOAF.

Hmm.. it should probably all be in Mom's FOAF; her personal details
would probably have to be stored outside of the main FOAF profile /
document, and further be subject to access control.

Mom may not be happy with me giving her address and phone number to the
web population :p

The other way around: it could be in your own personal address book,
using the FOAF or vcard or another vocabulary, probably web accessible
but access controlled.

Personally identifiable information within FOAF files, either your own
or others', is something that needs to be analysed pretty heavily imho.

>> Imagine I want to send her a message on that video I made last
>> evening. It comes with a comment, and the video file attached. When I
>> hit "send", the system can match my preferences for that contact
>> (rather xmpp than sms, rather html mail than text, etc.), the
>> perceived urgency for that message (it's urgent, I need her to approve
>> it before i can propagate it to the rest of the family), and according
>> to my contact's delivery settings (as I'm her son, my messages are
>> doubled to email and SMS, but as she's hiking in the mountains, only
>> email delivery is available at that point).
> This is a very good use of data exchange.  I can look up my FOAF, and see
> what possibilities I have to send, then use my rule-based system to use the
> one I want.  I may not even have to interact with my computer for this
> decision to be made.  RIF, N3 or programmed 'handlers' can do this.

Whilst this is great, I honestly feel that this is an inverted process -
how much easier would it be if each person had a single id (webid) and
messages were sent to that, with a priority flag - then the receiver's
system handled all the methods, priority, routing, delivery etc.

The above would also put users in full control of their own data, would
mean they could remain hidden, choose their own routing priorities, and
also when one changes their details the world doesn't need to find out
and change stuff.

In short, hide all this behind a universal API, define the constraints
of the mediatype / message format, then let the application do the work.
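To make the receiver-side idea above concrete, here is a minimal sketch (the channel names and the rule structure are invented for illustration, not part of any existing protocol): the sender only flags a priority, and the recipient's own rules pick the delivery methods.

```python
def route(urgency: str, prefs: dict, reachable: set) -> list:
    """Receiver-side routing: the recipient's own rules map an urgency
    level to an ordered list of preferred channels; only channels
    currently reachable are actually used."""
    return [ch for ch in prefs.get(urgency, []) if ch in reachable]

# Mom's rules: urgent messages are doubled to email and sms...
prefs = {"urgent": ["email", "sms"], "normal": ["email"]}

# ...but she's hiking, so only email is reachable right now.
print(route("urgent", prefs, {"email"}))  # -> ['email']
```

The point is that when Mom changes her rules or her phone number, nothing on the sender's side needs to change.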

>> And the message is sent through the different media, according to
>> simple rules: the message being too long for SMS, the title is sent
>> along with a link to the rest of the message, including the video. The
>> email receives the whole thing, except the video, 240MB, is not
>> attached, but linked. Etc.
> As above.
>> 2.3 - Synchronous And Asynchronous
>> My contact list should cover both synchronous (e.g. chat) and
>> asynchronous (e.g. email) contacts, with easy merge capability (the
>> machine might not know how to recognize Hellekin from HK or HOW or
>> Hellekin O. Wolf, but the user will know and make the link. So, the
>> contact list would include a unique identifier across the network
>> (what PSYC calls the Uniform Name Location AKA UNL, or UNI, or
>> Uniform), the different associated endpoints (online services, IRC,
>> XMPP, PSYC, email, phone, etc) including sending/receiving rules for
>> that contact, with sensible defaults (huge attachments stripped from
>> emails, no attachment to the phone, "preferred" mean of contact, etc.)
> UNI seems to be reinventing a little.  Why not use URIs which are already
> established, instead of a new system that might take a decade to reach the
> same penetration?
> There's a number of ways to communicate
> - Push
> - Pull
> - Full duplex
> - Streaming
> - Syncing (e.g. dropbox)
> - Notifications based (e.g. I send you a UDP packet as notification and you
> come and collect data from me)

Partially agree (definitely URIs) - however I do feel that if we were to
take the approach of standardising a simple HTTP messaging protocol, then
this would allow the world at large to embrace a single unified
messaging protocol, and also allow implementation details for backwards
compatibility to remain in the hands of the faster-moving proprietary and
open source communities.

Get everybody speaking the same language (which they already speak in
well over 99% of cases), then let translators do the heavy lifting from there.
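As a sketch of what a minimal unified message could carry (every field name and URI here is invented for illustration, not any existing standard): the sender addresses a URI and states a priority, nothing more; transport and bridging to email/xmpp/sms is entirely the receiving server's concern.

```python
import json

# A hypothetical minimal payload for a unified HTTP messaging protocol.
message = {
    "to": "https://example.org/mom#me",        # recipient's URI (webid)
    "from": "https://example.org/nathan#me",   # sender's URI
    "priority": "urgent",
    "body": "Can you approve the video before I pass it on?",
    "attachments": ["https://example.org/video.ogv"],  # linked, not inline
}
payload = json.dumps(message)   # POSTed to the recipient's inbox URI
print(payload)
```

Anything that can make an HTTP POST can then speak the protocol, and legacy systems only need a translator at the edge.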

>> 3.0 - Seamless Data
>> 3.1 Local & Remote Are Obsolete
>> I don't want to "synchronize" my bookmarks. Instead, I want to access
>> them all at once, from Delicious or from my local browser, from my FTP
>> server and that other app where I share bookmarks with my friends.

Again, HTTP the universal api seems to come into play here: a service
which then connects through to ftp or whatever to get the files could
remain an implementation-specific detail, hidden behind a URI and a simple
api. Again though, it would be greatly simplified to have a single
server-to-server, or service-to-service, api (http) in the middle, with
the end goal of a single method, but allowing backwards compatibility with
legacy / other protocols, and simple migration to the new way of doing things.
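A sketch of that hiding-behind-a-URI idea (the backend names and handlers are purely illustrative): the client sees a single fetch interface, and the service maps each URI's scheme to whatever legacy protocol actually holds the data.

```python
from urllib.parse import urlparse

def fetch(uri: str, backends: dict):
    """Dispatch a request to an implementation-specific backend based
    on the URI scheme; callers never see ftp vs http vs anything else."""
    handler = backends.get(urlparse(uri).scheme)
    if handler is None:
        raise ValueError("no backend for %r" % uri)
    return handler(uri)

# Stand-in backends; real ones would speak the actual protocols.
backends = {
    "ftp": lambda uri: "bytes fetched via an ftp client",
    "http": lambda uri: "bytes fetched via an http client",
}
print(fetch("ftp://example.org/bookmarks.html", backends))
```

New protocols then become a one-line addition to the dispatch table, which is the migration story the paragraph above is after.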

>> I don't want to "upload" or "download" files. Instead, I want to be
>> able to select files on my local computer and drag'n'drop them to my
>> chat window so that a torrent is automagically created and shared
>> among that group.
> See above

Ideally it would be: push a notification of a new file over, then pull
that file at the user's discretion, possibly via proxy methods - thus
saving much bandwidth and allowing proxied methods to fetch the file.
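A sketch of that notify-then-pull flow (the offer structure and policy are invented for illustration): only a small offer is pushed; the actual bytes move when, and via whatever route, the recipient chooses.

```python
def offer_file(inbox: list, uri: str, size: int) -> None:
    """Push only a lightweight notification, never the file itself."""
    inbox.append({"type": "file-offer", "uri": uri, "size": size})

def auto_accept(inbox: list, max_size: int) -> list:
    """Recipient-side policy: list the offers worth pulling (here a
    simple size cutoff); the pull itself could go via a proxy."""
    return [o["uri"] for o in inbox
            if o["type"] == "file-offer" and o["size"] <= max_size]

inbox = []
offer_file(inbox, "https://example.org/video.ogv", 240 * 1024 * 1024)
offer_file(inbox, "https://example.org/notes.txt", 4 * 1024)
print(auto_accept(inbox, max_size=10 * 1024 * 1024))  # -> only notes.txt
```

The 240MB video from the earlier example never moves until someone actually wants it.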

>> 3.2 - Raw Data vs. File Formats
>> People don't care about the file format, it's a techie issue. What we
>> want is seamless integration of raw data. If I stumble upon a text
>> online, I want to be able to select part of it, include it in some
>> "box" and share that box with others. For example, that could take the
>> form of an RDF description of all the sources used to compose that new
>> document. But at this point, from the user perspective, the technical
>> implementation doesn't make sense.
>> That approach breaks free from a paradigm that has been dominating the
>> computing world so far, that exposes the data type, and especially the
>> file format, which is completely irrelevant to the user: she doesn't
>> deal with MP3 or OGG, with JPEG or PNG, with MKV or DIVX, but with
>> sound, images and video.
>> I think the current approach to exposing technical details to the user
>> inherits from the legacy of proprietary software, where a proprietary
>> format appears as a brand, a differentiator on the market. When
>> dealing with free software, the file format is only a technical
>> fact/constraint, and does not bring any value-added to the user.
> Right idea.  It's all about top-down data design.  I send you the data, and
> you work out what it is, and what to do with it.  APIs are bottom up data
> driven design, which obviously limit freedom to the functions it allows.
> There's a class of interactions that are universal APIs ... HTTP / RDF falls
> into this category, OData, GData and Core Data all are trying to explore the
> same space.

Agreed - and any disagreement would mean I was saying Tim Berners-Lee
(Turing, Gödel, Fielding), and the best brains at Microsoft, Google and
Apple were all wrong :p

>> 4.0 - Memory, Intimacy, Privacy
>> 4.1 - The Social Network as Extended Memory
>> Within the vision of McLuhan that tools are extensions of the human
>> (e.g. a hammer is an extension of the hand, a shoe an extension of the
>> foot), and John Licklider's (and others) view of the computer as a
>> mind-amplifier tool, we can consider the computer as an extension of
>> the mind. It helps us keep track of a lot of details that our memory
>> would filter out, such as precise dates, re-occurrences of events, etc.
>> One of the most private things is memory. Humans have a right to keep
>> that to themselves, and in fact, what's on your mind is inaccessible
>> to anyone else unless you chose to share it.
>> The explosion of social networking makes available a lot of that to
>> other people, including services that you're using to distribute your
>> private data to your friends. Until now, the drive to share has got
>> the priority over the drive to keep things private: the tools provided
>> makes it easy to share, and most social networking services rely on
>> the possibility to aggregate data and filter it to expose patterns,
>> and create detailed profiles of a person's behavior, that has a lot of
>> value for marketers (and intelligence agencies).
> Lots of work being done on provenance, but it's a young area.

Indeed, the W3C provenance working group seems to be making good progress
on these matters :)

>> 4.2 - The Right To, and Necessity of Intimacy
>> Most people don't care too much about privacy, as they're told that if
>> they don't do anything wrong, they don't have anything to fear or
>> hide, and that if you have something to hide, it's probably because
>> it's wrong. Of course, this is a fallacious argument. If you look at
>> it closely, you'll find out that the people promoting transparency of
>> your data are the first ones to use secrecy. Transparency of public
>> and market data is important, respectively, for democracy and fair
>> competition. But opacity of private data not only protects the
>> citizens from abusive governments, but also proceeds from a natural
>> need for privacy and intimacy (think about toilets.)

Per-resource access control is key here; the web till now seems to take
a very much all-or-nothing approach (access nothing, or access everything
if authenticated) - granular access needs to be worked back into the web.
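A sketch of what per-resource granularity means (the ACL layout and URIs are invented for illustration): each resource carries its own list of allowed agents, instead of one login gating everything at once.

```python
def can_access(acl: dict, agent: str, resource: str) -> bool:
    """Granular check: a resource is readable only by the agents its
    own entry lists; "*" marks a world-readable resource."""
    allowed = acl.get(resource, set())
    return "*" in allowed or agent in allowed

acl = {
    "/profile/public": {"*"},                              # anyone
    "/profile/phone": {"https://example.org/nathan#me"},   # one agent
    # no entry at all -> nobody: the safe default
}
print(can_access(acl, "https://example.org/stranger#me", "/profile/phone"))  # -> False
```

With something like foaf+ssl supplying the agent's identity, Mom's phone number from section 2.2 could sit on the web yet stay readable by exactly one person.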

>> 4.3 - Building Memory for the Future
>> When you don't have control over your data, you take the risk of
>> losing your intimacy, as well as your memory. The time passed in front
>> of a computer, or online, is growing. It's important to realize that
>> for many, sharing that intimacy online also builds their memory for
>> the future, to share with their grand-kids...
>> That aspect of social networking, that you open the intimacy of your
>> mind to others, should be emphasized.
> Agree with almost everything, but sceptical about the UNI system (though you
> may be right).
> What's missing?
> I'm missing some of the 'linked' data principles.  I send you some data, but
> it should also link to other data to make it that more interesting.  One of
> the most common social activities is link sharing ... mashups and meshups
> become more powerful over time.
> Read / Write web -- I think this is covered by what you post, but not in
> huge detail.   A top down data driven "data wiki" is powerful enough to do
> anything you will ever need.  It's just a case of programming around it.
> Caching / REST seems not to have been covered, but that's not necessarily
> the end of the world
> I think maybe the acid test of a system is that you don't even know you are
> using it.  Most people don't know the difference between the web and the
> internet, 'it just works'.  This is due to some profound architectural
> decisions that put linking and identifiers at the heart of things.  Again,
> it's so simple, you don't even notice it most of the time.
> Conclusion, great post I think you've covered most aspects, and some
> philosophical ones that are very interesting.  I think we can build this all
> with FOAF, and more too.  It's just a case of putting all the pieces
> together in the right order! :)

Great post indeed! And a good response, with early identification of the
key points, from you.

From all my ramblings above, the takeaway would be:

1: Regardless of how you end up implementing things, please keep the
APIs as simple as possible, open them up, and make them work over HTTP.
This will allow involvement from the full web community and widespread
adoption, yields a possible route to standardisation, and gives us all a
chance to speak the same language.

2: Bonus points if you can make it RESTful (as in REST from Roy T
Fielding's dissertation, not "rest" as most see it).

3: Huge bonus points if you can model all data to be universal,
structured and linked.

Finally - hello all, first post here! :)


