Friday, May 26, 2006
Leigh Dodds
Last night I took my first look at Google Co-op, in particular the "Subscribed Links" feature, which allows users to add services to Google search results. The developer documentation outlines the process of creating a service, which amounts to writing a fairly simple XML configuration file. The configuration breaks down into two key chunks: search descriptions ("ResultSpecs") and data ("DataObjects"). DataObjects have an identifier and a type, which is just a literal string; identifiers must be unique within a type. Types can be "City" or "Radio Station" or anything else meaningful to your service. The objects are basically hashes of name/value pairs, which lets you expose arbitrary data to the Google Co-op servers in a reasonably structured format.
. . .This got me to wondering whether I could feed Google Co-op DataObjects created by a SPARQL query, suitably transformed into the right format. It turns out you can.
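As a rough illustration of that transformation, here is a minimal Python sketch that maps the standard SPARQL JSON results format onto DataObject-style XML. The element and attribute names (`DataObjects`, `DataObject`, `Attribute`) are assumptions standing in for the real Co-op schema - check the Subscribed Links developer docs for the exact format - and the SPARQL result is canned rather than fetched from a live endpoint:

```python
# Sketch: turning SPARQL SELECT results into Google Co-op-style DataObjects.
# The SPARQL JSON results format is standard; the XML element names below
# are illustrative assumptions, not the exact Co-op schema.
import xml.etree.ElementTree as ET

# A canned SPARQL JSON result (normally fetched from a SPARQL endpoint).
sparql_json = {
    "head": {"vars": ["id", "name", "frequency"]},
    "results": {"bindings": [
        {"id": {"value": "kexp"},
         "name": {"value": "KEXP Seattle"},
         "frequency": {"value": "90.3 FM"}},
    ]},
}

def bindings_to_dataobjects(result, obj_type):
    """Map each SPARQL result row to a DataObject: the 'id' variable
    becomes the object identifier, every other variable becomes a
    name/value attribute pair."""
    root = ET.Element("DataObjects")
    for row in result["results"]["bindings"]:
        obj = ET.SubElement(root, "DataObject",
                            id=row["id"]["value"], type=obj_type)
        for var, cell in row.items():
            if var != "id":
                ET.SubElement(obj, "Attribute",
                              name=var, value=cell["value"])
    return ET.tostring(root, encoding="unicode")

xml = bindings_to_dataobjects(sparql_json, "Radio Station")
print(xml)
```

The mapping is mechanical: each SELECT variable becomes a name/value pair, which matches the "hash of name/value pairs" shape described above.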
Thursday, May 25, 2006
Tim O'Reilly
The best way to make yourself web 2.0 is actually to expose your data in ways that let other people re-use it.
Wednesday, May 24, 2006
Alexander Chernev, Decision Focus and Consumer Choice among Assortments
When choosing among assortments, consumers opt for the variety offered by larger assortments; however, they often are less confident in choices made from larger than from smaller assortments.
. . .The decision-focus theory advanced in this research implies that the observed choice paradox can be attributed to a shift in consumer decision goals: from maximizing decision flexibility when choosing among assortments, to minimizing decision complexity and reaching a readily justifiable decision when choosing a product from the already selected assortment. In this context, the data reported in this article demonstrate that by varying the decision focus, it is possible to systematically vary consumers’ choice among assortments.
Friday, May 12, 2006
tagschema
The key insight, IMHO, of Data 2.0 is the recognition that tagschemas represent data warehouses of tagged data. Hence analysis of tagschemas must include dimensional modeling. Before we get into that, let's take an informal look at the three main dimensions along which we view folksonomy applications - namely "Users", "Tags" and "Items". Folksonomy applications provide domain-specific focus (photos, events, goals, URLs) and user access to data that varies along these three dimensions. This is the kind of thing that dimensional analysis talks about.
Three sub-problems
The Fundamental Scalability Problem of Data 2.0 leads to three major questions (sub-problems) that need to be solved in the transition from the Old World of Data 1.0 to the Brave New World of Data 2.0.
These three questions are:
a) If we use relational databases how do we build scalable data architectures for tagschema?
b) At the database level how do we add tagging capability to existing non-tag apps?
c) If we move beyond the relational database for tagapps, then what are the characteristics of the new database architectures?
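Question (b) is concrete enough to sketch. One common approach, shown here with Python and SQLite, is to leave the existing application tables alone and add a single table of (user, tag, item) triples - the three dimensions named above. Table and column names are illustrative, not taken from any particular tagapp:

```python
# Sketch of question (b): bolting tagging onto an existing relational
# app at the database level, without touching its existing tables.
# One extra table stores the (user, tag, item) triples that form the
# three dimensions of a folksonomy.
import sqlite3

db = sqlite3.connect(":memory:")
# Pre-existing application table, left unchanged.
db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, title TEXT)")
# The added tagging table: one row per (user, tag, item) triple.
db.execute("""
    CREATE TABLE taggings (
        user_id INTEGER NOT NULL,
        tag     TEXT    NOT NULL,
        item_id INTEGER NOT NULL REFERENCES items(id),
        PRIMARY KEY (user_id, tag, item_id)
    )""")
# Indexes so queries can enter along any of the three dimensions.
db.execute("CREATE INDEX tag_item ON taggings (tag, item_id)")
db.execute("CREATE INDEX item_tag ON taggings (item_id, tag)")

db.execute("INSERT INTO items VALUES (1, 'Dimensional modeling notes')")
db.execute("INSERT INTO taggings VALUES (42, 'data20', 1)")
db.execute("INSERT INTO taggings VALUES (42, 'warehouse', 1)")

# "Items for a given tag" -- one of the standard folksonomy queries.
rows = db.execute("""
    SELECT i.title FROM items i
    JOIN taggings t ON t.item_id = i.id
    WHERE t.tag = 'data20'""").fetchall()
print(rows)
```

This is essentially a fact table in the dimensional-modeling sense, which is why the warehouse framing above fits: "Users", "Tags" and "Items" are the dimensions, and each tagging event is a fact.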
Things to think about. Meaty problems. Massive opportunities at many levels.
May you live in interesting times too.
Don Dodge
Windows Live has brought new life to development teams within Microsoft. Developers are motivated by seeing their innovations out in the marketplace within months, rather than waiting years for the next major release of Office or Windows. This is a dramatic change for Microsoft. Another dramatic change is coming. It doesn't have a name yet, but indications are that Microsoft will build out the biggest hosted computing infrastructure in the world, enabling new hosted SaaS applications for consumers, small businesses, and even enterprises. This is a multi-year, multi-billion-dollar infrastructure project to support the Windows Live and Office Live innovations that are yet to come.
Thursday, May 11, 2006
Dare Obasanjo
According to the post MySpace IM is Live by Om Malik, MySpace is adding an integrated IM client to their social networking service. The same tack has been taken by Yahoo with Yahoo! 360 and by Microsoft with MSN Spaces. The integration is beta in all of the aforementioned services, but it seems clear that within a year most of the major Web players will have a suite of social software services - IM, social networking, blogging, photo and media sharing - as part of a single integrated experience. So far, Google are the laggards in this space, but I wonder how long it will take them to play catch up.
Wednesday, May 03, 2006
Bill de hÓra
As a consequence, using the current machine or language to implement a more relevant machine or language is a powerful technique. Since new machines can take time to figure out and are hard to estimate, they make line-of-business managers nervous - they smack of over-design, over-engineering, and going over budget. As Terence Parr put it: "Why program by hand in five days what you can spend five years of your life automating?" But when you have one, it's like going to battle with an AK-47 instead of a musket. The gun analogy is crude, but apt. There is no force multiplier in software development like a new machine. Once you start writing software this way, in terms of better machines, it's very seductive. New machines are silver bullets.
Power to get things done is what gets people fired up about Rails as a DSL, or MDA, or code generation. I suspect the current heady interest in Domain Specific Languages (DSLs) is driven by the "power tool" notion.
When Joel Spolsky talks about Leaky Abstractions, he's talking about the situation where an underlying machine - one which you should ideally pay no attention to - pops through your present machine into your coding reality, Lovecraftian style. The word we use to describe this hiding of machines is "transparency", which is one of those technical terms that is correct, but bone-headed. Most people see the word "transparent" as allowing you to see through to something, not hide it. What we really mean here is "opacity", the ability of one machine to hide the fact that it is implemented in terms of another machine. It's when the underlying machine (or machines) seeps through that there *is* transparency. That's when the leaky abstraction occurs and problems begin.
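The "new machine" idea is easy to make concrete. Here is a deliberately toy example in Python: a few lines of host-language code implementing a small stack machine whose invented instruction set becomes the "more relevant" language for arithmetic programs:

```python
# A toy illustration of "implementing a more relevant machine in the
# machine you have": a tiny stack machine for postfix arithmetic,
# hosted in Python. The instruction set is invented for this example.
def run(program):
    """Execute a list of instructions on a fresh stack; return the top."""
    stack = []
    ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
    for instr in program:
        if instr in ops:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[instr](a, b))
        else:
            stack.append(instr)  # anything else is a literal push
    return stack.pop()

# (2 + 3) * 4, written for the new machine rather than the host language
result = run([2, 3, "add", 4, "mul"])
print(result)  # → 20
```

The point is not the arithmetic but the layering: programs for the little machine stay tiny while the host language underneath pays no further attention - until, per the leaky-abstraction argument, it seeps through.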
. . .Multi-core is now the default production configuration for CPUs. I trialed a tiny laptop a few weeks ago and the most surprising thing was that it had a dual core. I'm no metal head, but given that multi-core chips are turning up in machines optimized for road warriors and office apps, and if people are right about the end of Moore's Law (bye-law?), then the next major machine/language argument could be over concurrency models, with shared memory playing the incumbent, just as static typing and OO do today in language design. Productivity and expressive power might be recast in terms of your ability to work with the concurrent architectures being imposed on you.
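As a tiny sketch of what a non-shared-memory concurrency model looks like in practice, here is a Python example where workers communicate only through queues - the queue is the channel, so no explicit locks appear in user code (the worker logic and names are invented for illustration):

```python
# Message-passing sketch: workers never touch shared mutable state
# directly; they only exchange messages through queues.
import threading
import queue

tasks, results = queue.Queue(), queue.Queue()

def worker():
    while True:
        n = tasks.get()
        if n is None:        # sentinel: no more work for this worker
            break
        results.put(n * n)   # reply via the results channel

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for n in range(5):           # send five jobs
    tasks.put(n)
for _ in threads:            # one sentinel per worker
    tasks.put(None)
for t in threads:
    t.join()

squares = sorted(results.get() for _ in range(5))
print(squares)  # → [0, 1, 4, 9, 16]
```

This is the kind of alternative to shared-memory threading (channels, actors, message passing) that the argument above anticipates competing over.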