A few weeks back, Google released their Social Graph API – a JSON service that gives you access to Google’s information about your online presence. A C# wrapper is available for download at the bottom. 

How it works

This video explains it better than I can:

Try it yourself

For some strange reason, Google has only made the API available in JSON, which is a pain if you want to use it from anything other than JavaScript. Since I wanted to use it from C#, I built a small wrapper class that exposes all the features of the very simple API. You can try it yourself on the demo social graph page I wrote using the wrapper class. Just enter your blog URL in the input box.
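At its core, wrapping the API just means fetching JSON over HTTP from the lookup endpoint. Here is a minimal sketch of that idea — the class name is hypothetical (not the actual wrapper from the download), and the `edo`/`fme` query parameters are my reading of the API's options for including outbound edges and following "me" links:

```csharp
using System;
using System.Net;

// Hypothetical minimal client for the Social Graph API lookup call.
// The downloadable wrapper class does more; this only shows the HTTP part.
public static class SocialGraphClient
{
    private const string LookupUrlFormat =
        "http://socialgraph.apis.google.com/lookup?q={0}&edo=1&fme=1";

    // Returns the raw JSON response for the given URL.
    public static string Lookup(string url)
    {
        using (var client = new WebClient())
        {
            string requestUrl = string.Format(
                LookupUrlFormat, Uri.EscapeDataString(url));
            return client.DownloadString(requestUrl);
        }
    }
}
```

From there the wrapper's job is just parsing the returned JSON into C# objects.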

Download

The wrapper is just a single small class that you can dump into your App_Code folder and you are ready to go. The demo web page is also included in the download so you can see how to use it. It’s just a single .aspx page.

socialgraph.zip (3.80 kb)

Lately I’ve been thinking a lot about data portability in terms of APIs, microformats and standards like SIOC, FOAF, OWL and APML. What fascinates me is that data portability enables people to share and move their data around the web seamlessly. I wrote about some microformats and standards for the semantic web a few days ago, but what about the APIs?

Nowadays, an API most often comes in the form of a REST service that returns JSON or XML, so that both JavaScript and server-side technologies can use it. That makes sense, and it promotes data portability. Then think about a website. Some parts of it are marked up with various microformats like XFN, hCard and hCalendar, but what about the rest of the content?

My idea is to render any webpage in either XML or JSON if a certain parameter is added to the URL. It could look like this:

www.example.com/about.aspx?format=xml

Then the about.aspx page would render its output as XML for machines to read and thereby expose some sort of read-only API. The standard could be RSS or Atom or both. Now any machine can read the content of your website easily. Think about how much easier it would be for search engines to index an RSS representation of a webpage instead of wading through all the excess mark-up and style information.
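In ASP.NET this could be a small check at the top of the page lifecycle. A sketch of what I have in mind — `BuildRssRepresentation()` is a hypothetical helper that would serialize the page's content as an RSS document:

```csharp
// In the code-behind of about.aspx (sketch, not the full implementation):
protected void Page_Load(object sender, EventArgs e)
{
    if (Request.QueryString["format"] == "xml")
    {
        // Skip the normal HTML rendering and emit RSS instead.
        Response.Clear();
        Response.ContentType = "application/rss+xml";
        Response.Write(BuildRssRepresentation()); // hypothetical helper
        Response.End();
    }
}
```

The same branch could switch on `format=json` and hand back a JSON representation instead, so one page serves humans and machines alike.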

If any webpage could be served as RSS, Atom and JSON, then the content of that page could be consumed in a multitude of ways. The best part is that it is relatively easy to implement. Because it is so easy, I can’t help wondering whether there is a reason no one has done it before.

The question is: who would use this, and does it make sense to begin with?