
Friday, January 27, 2006

How to build a distributed application: Introduction 

John Cavnar-Johnson
Boyd's fundamental thesis was that the quicker you iterate through the loop, the more effective you become. The most common misimpression is that his thesis is simply about speed. To the contrary, the real key to understanding the loop is that quicker iterations come primarily by combining an initially accurate orientation, which allows you to favor implicit guidance and control (the thick green arrows in the diagram above) over explicit decision-making, with the flexibility to accommodate changes in your orientation based on observation.

Automated unit testing makes a good example from the world of software development. Many developers believe that writing unit tests will inevitably slow them down. If you take a static view of the process, this seems obvious:

production code + unit tests > production code

But once you experience developing with effective unit testing, you realize that you can actually write better code faster. In OODA loop terms, you've made the creation of unit tests part of your Orientation ("requirements") phase. Each time you make changes to your code, you run the whole set of unit tests (this is implicit guidance and control) and you can observe whether the changes have adversely affected your conformance to spec (red light/green light). You gain the ability to tighten your coding loop which, in turn, leads to an overall decrease in the amount of time it takes to deliver the right code.
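A minimal sketch of that red light/green light loop, assuming NUnit and a hypothetical PriceCalculator class under test (none of these names come from the post):

    using NUnit.Framework;

    [TestFixture]
    public class PriceCalculatorTests
    {
        // Writing the test is part of Orientation: the spec is captured as executable code.
        [Test]
        public void DiscountIsAppliedOverThreshold()
        {
            // Hypothetical class under test: subtotals over 100 get 10% off.
            PriceCalculator calc = new PriceCalculator(100m, 0.10m);

            Assert.AreEqual(108m, calc.Total(120m)); // green light: still on spec
            Assert.AreEqual(90m, calc.Total(90m));   // no discount below the threshold
        }
    }

Running the whole suite after every change is the implicit guidance and control: a red bar tells you immediately that your latest change has drifted from the spec.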

. . .

This post is designed to lay the groundwork for a series of posts that will translate this somewhat esoteric theoretical framework into practical advice.

Topics: DistributedProcessing | Modelling | Representation



ASP.NET 2.0 Tutorials

ScottGu's Blog

Data Tutorial #2: Building our Master Page and Site Navigation Structure

This past weekend I posted a step-by-step tutorial on how to build a strongly-typed DAL (data access layer) using Visual Web Developer (which is free) and ASP.NET 2.0.


My plan over the next few weeks is to post many follow-up samples that show various ways to use this DAL to build common data scenarios using ASP.NET 2.0 (master details, filtering, sorting, paging, 2-way data-binding, editing, insertion, deletion, hierarchical data browsing, hierarchical drill-down, optimistic concurrency, and more).
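As a rough sketch of the page-level code these scenarios tend to boil down to (the NorthwindTableAdapters namespace, ProductsTableAdapter, and GridView1 control below are assumptions for illustration, not taken from the tutorial):

    using System;
    using System.Web.UI;
    // The typed DataSet/TableAdapter namespace is whatever you picked in the designer.
    using NorthwindTableAdapters;

    public partial class ProductList : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                // Strongly-typed call into the DAL: no ad-hoc SQL or untyped DataSets in the page.
                ProductsTableAdapter adapter = new ProductsTableAdapter();
                GridView1.DataSource = adapter.GetProducts();
                GridView1.DataBind();
            }
        }
    }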

Topics: ASP.NET | Tutorial



VS Code Snippets 

Scott Guthrie
What makes code snippets really cool is that you can easily create and define your own.  So over time you can build up your own library of common patterns that you use, and add efficiency to your daily tasks (for example: do you always write data code in a certain way, or insert objects into the ASP.NET cache using a specific pattern, or validate security, etc.).  These snippets are stored within XML files on disk, and can be easily shared across a dev team (or friends).
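For reference, a snippet file is just XML in the Visual Studio 2005 CodeSnippet schema. A minimal sketch of one for the cache-insertion pattern mentioned above (the shortcut, key placeholder, and code body are made up for illustration):

    <?xml version="1.0" encoding="utf-8"?>
    <CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
      <CodeSnippet Format="1.0.0">
        <Header>
          <Title>Insert into ASP.NET cache</Title>
          <Shortcut>cacheins</Shortcut>
        </Header>
        <Snippet>
          <Declarations>
            <Literal>
              <ID>key</ID>
              <Default>cacheKey</Default>
            </Literal>
          </Declarations>
          <Code Language="csharp">
            <![CDATA[Cache.Insert($key$, value, null,
        DateTime.Now.AddMinutes(5), Cache.NoSlidingExpiration);]]>
          </Code>
        </Snippet>
      </CodeSnippet>
    </CodeSnippets>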

Here are a couple of articles and links I found on the web that go into how to build and use your own snippets:

Hope this helps,

Topics: Programming | Techniques



Wednesday, January 25, 2006

The DRY principle 

Dave Thomas
The DRY principle says there must be a single authoritative representation of a piece of knowledge. It's worded that way for a reason: it says nothing about having lots of non-authoritative copies. So why isn't this duplication? Because (assuming an active code generator) these copies are not part of the code base. They are simply artifacts, in the same way that (say) object or class files are artifacts of the build process. The input to the code generator, which obviously is a part of the checked-in code, contains no duplication. The transient artifacts that are generated may.
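A toy sketch of the idea in C# (the file names and "Name:type" field format are invented): the field list is the single authoritative representation, and the emitted class is a transient build artifact you regenerate rather than check in:

    using System;
    using System.IO;
    using System.Text;

    class GenerateDto
    {
        static void Main()
        {
            // The authoritative representation: one "Name:type" entry per line,
            // e.g. "Name:string" and "Age:int". Everything below is derived from it.
            string[] fields = File.ReadAllLines("Customer.fields");

            StringBuilder sb = new StringBuilder();
            sb.AppendLine("// <auto-generated/> -- regenerated on every build, never edited");
            sb.AppendLine("public partial class Customer");
            sb.AppendLine("{");
            foreach (string field in fields)
            {
                string[] parts = field.Split(':');
                sb.AppendLine("    public " + parts[1] + " " + parts[0] + ";");
            }
            sb.AppendLine("}");

            File.WriteAllText("Customer.generated.cs", sb.ToString());
        }
    }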

Topics: CodeGeneration | Representation



Friday, January 20, 2006

Entropy and metadata aggregation 

Stefano Mazzocchi
One thing we figured out a while ago is that merging two (or more) datasets with high quality metadata results in a new dataset with much lower quality metadata.

. . .

Independently, the two datasets are very coherent, and a lot of time, money, and energy was spent on them. Together, and even assuming the ontology/schema crosswalks were done so that the owners of the two datasets would agree (which is not a given at all, but let's assume that for now), they look and feel like a total mess (especially when browsing them with a faceted browser like Longwell).

The standard solution in the library/museum world is to map against a higher-order taxonomy, something that brings order to the mix. But either no such thing exists for that particular metadata field, or one does exist but it was incredibly expensive to make and maintain, and such taxonomies tend, almost by definition, to become very hard to displace once you commit to one of them.
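A toy illustration of the difference such a crosswalk makes (all the terms and the mapping are invented): two internally coherent "material" facets fragment when merged naively, and only recover once someone pays for the mapping:

    using System;
    using System.Collections.Generic;

    class FacetMerge
    {
        static void Main()
        {
            // Two internally coherent datasets that label the same facet differently.
            string[] museumA = { "oil on canvas", "tempera" };
            string[] museumB = { "Oil paint", "Egg tempera" };

            // Naive merge: one facet, four distinct values -- messier than either source.
            Dictionary<string, bool> naive = new Dictionary<string, bool>();
            foreach (string term in museumA) naive[term] = true;
            foreach (string term in museumB) naive[term] = true;
            Console.WriteLine("Naive merge: " + naive.Count + " facet values");        // 4

            // Crosswalking against a higher-order taxonomy restores coherence, but the
            // mapping itself is the expensive thing to build and maintain.
            Dictionary<string, string> crosswalk = new Dictionary<string, string>();
            crosswalk["oil on canvas"] = "oil paint";
            crosswalk["Oil paint"] = "oil paint";
            crosswalk["tempera"] = "tempera";
            crosswalk["Egg tempera"] = "tempera";

            Dictionary<string, bool> mapped = new Dictionary<string, bool>();
            foreach (string term in museumA) mapped[crosswalk[term]] = true;
            foreach (string term in museumB) mapped[crosswalk[term]] = true;
            Console.WriteLine("Crosswalked merge: " + mapped.Count + " facet values"); // 2
        }
    }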

. . .

I find this discovery a little ironic: the semantic web, by adding more structure to the model while increasing the diversity of the ontological space, might become even *more* messy than the current web, not less.

. . .

So, are we doomed to turn the web into a Babel of languages? And are we doomed to dilute the quality of pure data islands by simply mixing them together?

Luckily, no, not really.

What's missing is the feedback loop: a way for people to inject information back into the system that keeps the system stable. Mixing high quality metadata results in lower quality metadata, but the individual qualities are not lost; they are just diluted. Additional information/energy needs to be injected into the system for the quality to return to its previous level (or higher!). This energy can be the energy already condensed in the efforts made to create controlled vocabularies and mapping services, or it can be distributed across a group of people united by common goals/interests and social practices that keep the system stable, trustworthy, and socio-economically feasible.

Topics: Meaning | Entropy | Metadata | RDF


