Wednesday, November 23, 2005

Human- and machine-facing syndication 

Danny Ayers
If you want to be human-readability-oriented, you can deliver things like Calendars and Contacts over Atom using structured XHTML (i.e. microformats) as the payload. Alternatively, it might be more appropriate in a given setup to use a more general data format for the payload, such as RDF/XML (which is what DOAP uses). That’s fine: the Atom content element has a type attribute, so the payload format can be easily declared.
Aristotle Pagaltzis
Don’t forget that you can always stick some human-readable HTML rendition into atom:summary if your atom:content payload is machine-readable data. I consider this summary/content distinction one of the most crucial and underappreciated value propositions of Atom.
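A minimal sketch of the pattern Aristotle describes, assuming an invented entry (the title, dates and payload here are illustrative, not from a real feed): machine-readable data in atom:content with its format declared via the type attribute, and a human-readable rendition alongside it in atom:summary.

```python
# Sketch: a machine-readable payload in atom:content, declared via the
# type attribute, paired with a human-readable atom:summary fallback.
# The entry itself is invented for illustration.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

entry_xml = """<entry xmlns="http://www.w3.org/2005/Atom">
  <title>Team meeting</title>
  <summary type="xhtml">
    <div xmlns="http://www.w3.org/1999/xhtml">Meeting on 2005-11-30 at 10:00</div>
  </summary>
  <content type="application/rdf+xml">
    <!-- machine-readable payload would go here -->
  </content>
</entry>"""

entry = ET.fromstring(entry_xml)
content = entry.find("{%s}content" % ATOM)
summary = entry.find("{%s}summary" % ATOM)

# A consumer can dispatch on the declared payload type...
print(content.get("type"))   # application/rdf+xml
# ...while a human reader still gets the summary rendition.
print(summary.get("type"))   # xhtml
```

The point is that the two elements serve different audiences at once: software dispatches on content's type attribute, while aggregators with no knowledge of the payload format can still render the summary.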

Topics: Atom | MicroFormats | Workflow


Tuesday, November 22, 2005

XML, Microformats and partial understanding 

Phil Dawes
My recent look at microformats has led me to think more about the levels of grey between being able to fully interpret (understand) data, and not being able to interpret it at all. Microformats are currently very binary in this regard - either the software knows the microformat and is able to interpret it, or it doesn’t. This is at odds with other data formats, including XML and RDF, which can convey structure even if the software doesn’t fully understand the schema and vocabulary in use.
. . .
1) The software is unable to interpret the meaning of the data, and also unable to interpret the structure
. . .
2) The software is unable to interpret the meaning of the data, but can interpret its structure
. . .
3) The software is able to interpret some of the meaning (knows some of the vocabulary used), but not all of it.
. . .
4) The software is able to interpret the meaning of the data.

Then at this point it’s probably human.
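Level 2 in Phil's list is easy to demonstrate with generic XML tooling: the software can recover the structure of a document whose vocabulary it has never seen. A small sketch, using an invented vocabulary:

```python
# Level 2: the software knows nothing about this (invented) vocabulary,
# yet it can still recover the structure -- which elements contain
# which, and what text they hold.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<zorp:ledger xmlns:zorp="http://example.org/unknown-vocab">
  <zorp:item kind="flib">12</zorp:item>
  <zorp:item kind="grue">7</zorp:item>
</zorp:ledger>""")

# Structural questions are answerable without any schema knowledge:
children = list(doc)
print(len(children))                # 2
print([c.text for c in children])   # ['12', '7']
```

A microformat consumer, by contrast, sees only generic XHTML unless it recognizes the specific class names in use - which is exactly the binary behaviour described above.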

Topics: Meaning | XML | Representation


Sunday, November 20, 2005

Searching for the Universal Matrix in Metaphysics 

Dr. Harold E. Puthoff
For me this hypothesis emerged when I considered how uneconomical Nature would have to be to posit, on the one hand, an all-pervading energetic field of ki or chi, as in the metaphysics of the martial arts and acupuncture, and, on the other hand, also posit an all-pervasive energetic field of quantum zero-point energy. It appeared to me to be more likely that we are dealing with a single underlying substructure which goes by various names in various cosmologies, depending on whether it is in its pre-manifest random form, or patterned at various hierarchical levels, including the “purely material.”

. . .

If my goal for this research comes to full fruition, what would emerge would be an increased understanding that all of us are immersed, both as living and physical beings, in an overall interpenetrating and interdependent field in ecological balance with the cosmos as a whole, and that even the boundary lines between the physical and “metaphysical” would dissolve into a unitary viewpoint of the universe as a fluid, changing, energetic/information cosmological unity.

Topics: Quantum | Meaning


Friday, November 18, 2005

XML as Momma Bear Typing 

Chris Suver, Microsoft
What folks have discovered is really the effect of economics on data typing . . . Simple economics controls whether data is typed or untyped . . . The essential point is that the structuring (or typing) of the data is only partially complete. It’s not that the data is intrinsically different, nor that it doesn’t have type (or can’t be typed). It is simply that the effort to fully define the type of the data is not worthwhile. As a result, the author adds only as much type information as necessary to satisfy the immediate needs. Keep in mind that semi-structured data is a task left uncompleted rather than something fundamentally new. It is simply that the information has only part of its type information in place.
. . .
At one extreme is the data that is stored today in SQL databases, such as accounting data (demonstrably high-value data).
. . .
At the other extreme is data on the Web. On the whole, this is very low value . . . Search is heavily used, even though the bulk of its raw data is of no value, and the results are often noisy. Low-value data, but aligned with low cost, makes this a cost/benefit win. As a result, this is one of the most widely used tools on the Web, another clear success.
. . .
I refer to XML typing as a soft system because the typing is applied to the data when the XML is processed. Often the types used by the sender and the receiver are different. Sometimes these are large differences, sometimes small, but any difference means that the transport itself must be softly typed so that it can be easily adapted to the different uses.

So, XML’s authoring cost is low. The author can choose to add as much meta-data (i.e., structure) or as little as appropriate. Here, again, is a case where we have an excellent match between the technology and the user. From this point of view it makes sense that XML has been well-received.
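One way to picture the "soft typing" Suver describes, in a minimal sketch (the element and values are invented): the wire format carries only text, and each consumer applies as much or as little type as its own task needs, at processing time.

```python
# The transport is softly typed: the XML carries only text, and type is
# applied when the data is processed. Two consumers of the same element
# can legitimately apply different types. (Invented example.)
import xml.etree.ElementTree as ET
from decimal import Decimal

elem = ET.fromstring("<price>19.99</price>")

# A display consumer needs no type at all -- the text is enough.
as_text = elem.text

# An accounting consumer applies a precise numeric type.
as_money = Decimal(elem.text)

print(as_text)                      # 19.99
print(as_money + Decimal("0.01"))   # 20.00
```

Neither consumer is wrong; the sender committed only to the structure, and each receiver adds the type information its cost/benefit trade-off justifies.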

Topics: XML | Representation


Thursday, November 17, 2005

XML and the reuse of data in multiple processes 

Sean McGrath
As Walter Perry points out regularly on xml-dev, the real value of XML is that it reduces the extent to which I force any one processing model onto others. This enables re-use and innovation in a way that, say, application sharing does not.

The price we pay for this freedom is that designers of XML languages need to find ways to communicate "processing expectations" or "processing models" separately from the data.

It is still the case today that the true meaning of a chunk of markup is dependent on what some application actually *does* with it. It is not in the data itself. For example, I can create rtf, xml, csv files that are completely valid per the markup requirements but "invalid" because they fail to meet particular processing models in rtf/xml/csv-aware applications.

This is one reason why HTML as a Web concept (forget about markup for the moment) and XML as a Web concept are so different. With HTML the processing model was a happy family of 1, namely, "let's get this here content rendered nicely onto the screen". With XML the processing model is an extended family of size...infinity. Who knows what the next guy is going to do with the markup? Who knows what the next processing model will be? Who knows whether or not my segmentation of reality into tags/attributes/content will [meet] the requirements of the next guy?
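Sean's point about valid-but-"invalid" data is easy to make concrete with csv (the file contents here are invented): the data below is perfectly well-formed per the format, but an application whose processing model expects a numeric second column will reject it.

```python
# Well-formed per the format, "invalid" per one processing model:
# Python's csv module parses this happily, but an application whose
# processing model expects a numeric second column rejects row 2.
# (The data is invented for illustration.)
import csv
import io

data = "name,amount\nwidget,3\ngadget,lots\n"

rows = list(csv.reader(io.StringIO(data)))
print(len(rows))   # 3 -- syntactically valid CSV throughout

bad = [r for r in rows[1:] if not r[1].isdigit()]
print(bad)         # [['gadget', 'lots']] -- fails this processing model
```

The "meaning" of the second column lives in the consuming application's expectations, not in the file itself - which is exactly the gap between markup validity and processing models described above.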

Topics: XML | Representation | Workflow | Meaning


Monday, November 07, 2005

Good summary of ASP.NET 2.0 Async issues 

Mike Woodring
Don't get me wrong. I've really enjoyed delving into the asynchronous component/context architecture supported by CLR 2.0. And I really appreciate what the ASP.NET 2.0 team has done to facilitate the development of long-running pages. But I've only recently realized that I'd made several oversimplifications in my mind along the way that would have led to subtle issues down the road in development. Hopefully the information here will prevent you from making the same mistakes.

Topics: Async | ASP.NET


Sunday, November 06, 2005

Switching to live.com 

Dare Obasanjo
For the past few years my browser home page has alternated between http://news.google.com and http://my.yahoo.com. I like Google News for the variety of news they provide but end up gravitating back to Yahoo! News because I like having my stock quotes, weather reports and favorite news all in a single dashboard. This morning I decided to try out live.com. After laying out my page, I went to microsoftgadgets.com to see what gadgets I could use to 'pimp my home page' and I found a beauty: the Seattle Bridge Traffic Gadget. I've talked about the power of gadgets in the past but this brought home to me how powerful it is to allow people to extend their personalized portal in whatever ways they wish. Below is a screenshot of my home page.

Topics: Web 2.0 | Microsoft


This Excel Server story is getting almost no attention 

But I think it could develop into something much more important when combined with the Workflow (WF), Atlas and Windows Live stories. However, I still think that Microsoft is treating its Web 2.0 initiatives as a Trojan Horse for the emergence of the "Smart Client", which may limit its commitment to this stuff. Microsoft Fills In Excel Server Plans, BI Push
Caren told CRN that the upcoming Excel services will enable users to store, manage and view Excel spreadsheets from the server. "You can create it on the desktop, save to the server, put it through workflow for approvals, manage it, and have it expire," he said. That spreadsheet can also be accessed via a browser, expanding viewing capabilities beyond people who have Excel on their desktops, he said.

Topics: OLAP | Excel

