Monday, March 19, 2007
The next big thing in user interface design will be auto-complete pidgin languages linked to Wiki-style application interfaces for Data 2.0 data surfing. I've got a few ideas that I will hopefully have time to prototype soon. These ideas link back to my very first blog posts: "The real work of metadata driven architecture", "Using Wiki Pidgin Languages to Dynamically Create Links for a Read/Write Web", and "XML backed Pidgin Guy". Topics: Data2.0 | Web2.0 | Representation
Tuesday, March 13, 2007
Jon Udell's Interviews with Kingsley Idehen
Kingsley: Well, I don't know, you tell me. The semantic web gets kind of confusing because there's a lot of research content, and some of the guys are real deep graph-model geeks who put out a lot of content. But a lot of people don't look at it because it's already sort of complex, and people switch off without really getting to the best of what's really going on. A graph model, ideally, will allow you to explore almost all the comprehensible dimensions of the nodes in that network. So you can traverse that network in a myriad of different ways, and it will give you much more flexibility than if you're confined to a tree; in effect, that is the difference between XQuery and SPARQL. I always see the difference between these two things this way. If you visualize nodes on a network, SPARQL is going to get you to the right node. Your journey to what you want is facilitated by SPARQL, and then XQuery can take you deeper into that one node, which has the specific data the graph traversal is taking you to. You see what I mean here?

Jon: Since I haven't studied SPARQL at all yet, if you could make that a little more concrete with some sort of an example it would help.

Kingsley: Let's take a microformat as an example. An hCard, or an hCalendar, is a well-formed format; in a sense, it's XML. You can locate the hCard in question, so if you had a collection of individuals who had profiles on the network in the repository, it could be a graph of a social network or a group of people. Now, through that graph you could ultimately locate common interests. And eventually you may want to set up calendars, but if the format of the calendar itself is well formed, with XQuery you can search a location, and with XPath it's even more specific. Here you simply want to get to a node in the content and get a value. Because the content is well formed you can traverse within the content, but XQuery doesn't help you find that content as effectively, because in effect XQuery is really all about a hierarchical model.
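The division of labor Kingsley describes can be sketched in a few lines of Python: a graph-pattern query (standing in for SPARQL) locates the right node, and then a path expression over that node's well-formed content (standing in for XQuery/XPath) pulls out one value. All names and data here are hypothetical; a real system would use an actual SPARQL engine and XQuery processor.

```python
# Toy sketch: graph traversal finds the node, tree traversal drills into it.
import xml.etree.ElementTree as ET

# A graph as (subject, predicate, object) triples, as a SPARQL engine sees it.
triples = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "interest", "semantic-web"),
    ("carol", "interest", "semantic-web"),
    ("carol", "calendar", "carol-cal"),
]

def match(pattern):
    """Match a triple pattern; None plays the role of a SPARQL variable."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Step 1 (graph, SPARQL-like): who shares alice's interest, and where is
# that person's calendar?
sharers = {s for s, _, _ in match((None, "interest", "semantic-web"))} - {"alice"}
cal_ids = [o for s, _, o in match((None, "calendar", None)) if s in sharers]

# Step 2 (tree, XPath-like): the located content is well formed, so a
# path expression gets straight to the one value we wanted.
calendars = {
    "carol-cal": "<vcalendar><vevent><summary>RDF meetup</summary>"
                 "<dtstart>2007-04-01</dtstart></vevent></vcalendar>",
}
doc = ET.fromstring(calendars[cal_ids[0]])
print(doc.find("./vevent/dtstart").text)  # the value the whole journey was after
```

The point of the sketch is the hand-off: the triple patterns can reach the calendar from any direction through the graph, while the path expression only makes sense once you are already standing inside one well-formed document.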
. . . Jon: You mentioned Spotlight earlier, and a lot of what you were just describing, in terms of having this very heterogeneous data store that can hold lots of different things, with a coherent way of querying across them and managing metadata in various flavors, is obviously also extremely interesting on the desktop. Spotlight is a desktop product, and WinFS is going to be a desktop product. Although we have tended to think of the class of technology you have been developing as server technology, we have desktop machines that can run it perfectly capably. That seems like a pretty interesting scenario.

Kingsley: Absolutely. I mean, Virtuoso runs from my desktop. See, I have a very federated view of networking. If you look at Web 2.0 today, we take the view that everything is going to be hosted somewhere. In a sense, we are looking at a centralized model, with these new terminals that we call PCs with browsers talking to a centralized server. So we are networking, but networking is about many points that are able to communicate using different protocols: client/server, peer-to-peer, you name it. My view is that what people really want to be able to do is, in some cases, run software on a desktop. Virtuoso is so small that of course you can run it on a desktop. Well, you know that anyway yourself, because you ran UserLand on the desktop, and in the beginning that was one of the real keys to its success.
Monday, March 12, 2007
Tim O'Reilly
What's so clever is that by articulating the types as a separate structure from the data, and having instances inherit that structure when they are created, users don't think they are providing metadata -- they think they are just providing data. Because anyone creating a new instance is prompted to fill out the data in a structured way, it doesn't seem like an extra task; rather, the software seems to be being helpful.
. . .The user is given an opportunity to create a very structured entry that doesn't feel like a chore but just the natural way to perform the task.
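The pattern O'Reilly is pointing at is easy to sketch: the "type" lives as a separate structure listing fields and prompts, and creating an instance simply walks that structure, so every value the user enters arrives already tagged with the field it belongs to. The type definition and field names below are hypothetical, a minimal illustration rather than any particular wiki's implementation.

```python
# Minimal sketch: a type is a separate structure; instances inherit it.

TYPES = {
    # field name -> human-readable prompt; this structure IS the metadata
    "restaurant": {"name": "Name", "cuisine": "Cuisine", "city": "City"},
}

def new_instance(type_name, answers):
    """Build an instance by walking the fields the type declares.

    `answers` stands in for interactive prompting; the user just types
    data, but each value lands in a named, structured slot.
    """
    schema = TYPES[type_name]
    return {"type": type_name,
            **{field: answers[prompt] for field, prompt in schema.items()}}

entry = new_instance("restaurant",
                     {"Name": "Chez Panisse",
                      "Cuisine": "Californian",
                      "City": "Berkeley"})
print(entry)
```

Because the structure comes from the type rather than the user, every entry created this way is automatically queryable by field, which is exactly the metadata the user never felt themselves providing.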
Sunday, March 11, 2007
David H. Wolpert
One ramification of this is the “No Free Lunch” (NFL) theorems, which state that any two algorithms are equivalent when their performance is averaged across all possible problems. This highlights the need for exploiting problem-specific knowledge to achieve better than random performance.
. . .In contrast to the traditional optimization case where the NFL results hold, we show that in self-play there are free lunches: in coevolution some algorithms have better performance than other algorithms, averaged across all possible problems. However in the typical coevolutionary scenarios encountered in biology, where there is no champion, the NFL theorems still hold.
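The "averaged across all possible problems" claim can be checked by brute force on a toy domain: enumerate every objective function on a tiny search space, run two fixed deterministic search strategies, and compare their average performance. The domain, the two orderings, and the best-value-after-k score below are illustrative choices, not Wolpert's formal setup.

```python
# Tiny brute-force check of the No Free Lunch flavor of result:
# two fixed search algorithms perform identically when averaged
# over ALL possible problems on a small domain.
from itertools import product

points = [0, 1, 2]                                     # the search space
problems = list(product([0, 1], repeat=len(points)))   # every f: points -> {0, 1}

def best_after_k(order, f, k):
    """Best objective value found after k evaluations in a fixed visit order."""
    return max(f[x] for x in order[:k])

def avg_performance(order, k=2):
    """Average best-after-k score across all possible problems."""
    return sum(best_after_k(order, f, k) for f in problems) / len(problems)

algo_a = [0, 1, 2]   # scans left to right
algo_b = [2, 0, 1]   # a different, equally deterministic strategy
print(avg_performance(algo_a), avg_performance(algo_b))  # identical averages
```

Any non-repeating visit order gives the same average here, which is the NFL point in miniature: only problem-specific knowledge, not cleverness of the search order alone, can beat another algorithm across all problems at once.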
Harold Morowitz
Life is universally understood to require a source of free energy and mechanisms with which to harness it. Remarkably, the converse may also be true: the continuous generation of sources of free energy by abiotic processes may have forced life into existence as a means to alleviate the buildup of free energy stresses.
. . .A deterministic emergence of life would reflect an essential continuity between physics, chemistry, and biology. It would show that a part of the order we recognize as living is thermodynamic order inherent in the geosphere, and that some aspects of Darwinian selection are expressions of the likely simpler statistical mechanics of physical and chemical self-organization.
Sunday, March 04, 2007
Sir Timothy Berners-Lee
First, the Web will get better and better at helping us to manage, integrate, and analyze data. Today, the Web is quite effective at helping us to publish and discover documents, but the individual information elements within those documents (whether it be the date of an event, the price of an item on a catalog page, or a mathematical formula) cannot be handled directly as data. Today you can see the data with your browser, but you can't get other computer programs to manipulate or analyze it without going through a lot of manual effort yourself. As this problem is solved, we can expect the Web as a whole to look more like a large database or spreadsheet, rather than just a set of linked documents.
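One concrete reading of this: once the information elements inside a page carry machine-readable labels, a short program can treat the page as rows in a database rather than opaque text. The class names and markup below are hypothetical, microformat-style annotations, not any published vocabulary.

```python
# Sketch: a labeled page read as structured data, using only the stdlib.
from html.parser import HTMLParser

class ItemExtractor(HTMLParser):
    """Collect the text of elements whose class names a machine can act on."""
    FIELDS = {"fn", "price", "dtstart"}   # item name, price, event date

    def __init__(self):
        super().__init__()
        self.current = None   # field we are inside, if any
        self.records = {}

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if cls in self.FIELDS:
            self.current = cls

    def handle_data(self, data):
        if self.current:
            self.records[self.current] = data.strip()
            self.current = None

page = ('<p>Buy <span class="fn">Widget</span> for '
        '<span class="price">9.99</span> before '
        '<span class="dtstart">2007-06-01</span>.</p>')
parser = ItemExtractor()
parser.feed(page)
print(parser.records)   # the page, seen as data instead of prose
```

With the labels in place, the same sentence a human reads as prose yields a record a program can sort, sum, or join against other pages, which is the database-like Web the excerpt anticipates.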