
Thursday, April 29, 2004

Declarative Application Building considered dangerous 

via Tim Bray

Declarative Application Building: This is the idea, most publicly on display in the various flavors of BPEL and WS-Choreography, that you don’t need to write a bunch of icky procedural code to deploy your sophisticated Web-Services application. You write a bunch of declarative statements, which is good, only they’re in XML, which is better. Once you’ve written them, your application builds itself!

Sounds great, right? Except I’ve seen this movie before; twice, in fact. The first time was during the Eighties with the advent of the 4GLs (“Fourth-Generation Languages”), whose value proposition read more or less exactly the same: build your application declaratively and let us take care of the details. Then again in the Nineties, there was a (now mostly forgotten) class of application called “Workflow”; the idea was that you diagrammed out your business processes as a sequence of events. You could put in conditional branches and loopback conditions, and there were these way-cool graphic front-ends so you could actually see the processes right there on the screen. Then the workflow applications would route your work among the fallible human cogs who actually did it, only there wouldn’t be expensive mistakes any more.

There were some problems, though. It turned out that with 4GLs, you could build 90% of your application in no time at all, but there always seemed to be one or two little pieces that were necessary but required some tricky procedural code and took endless effort to get done, and sometimes you couldn’t get them done at all.

And as regards the workflow products, I never once heard of a single successful deployment of a complex automated-workflow scenario on any scale. It works OK for simple workflows, like at an HMO: so many thousand transactions come in the door every day, most get approved and paid, a smaller proportion get kicked up to second level, then human judgment takes over. But complex workflows resist automation, mightily.

Once again, I’m not saying it can’t be done, or that it’s not a good idea. I’m just saying that it’s not a safe bet today for the prudent businessperson.



Tuesday, April 27, 2004

Combining XML Documents with XInclude 

Good article on XInclude (including XPointer support):
The spirit of modular programming dictates that you break your tasks up into small, manageable chunks. This dictum is also applicable when producing XML documents. It often makes sense to build a large document from several smaller ones. Some situations that call for this chunking include composing a single book out of multiple chapters, building a Web page out of separately maintained documents, or adding a standard footer, such as a corporate disclaimer, into a document. There are many ways to approach the problem of constructing a single XML document from multiple documents. This article focuses on one, which looks destined to be the universal general-purpose mechanism for facilitating modularity in XML: XML Inclusions (XInclude).
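Here's a minimal sketch of the idea using Python's standard library (the file names are made up, and note that xml.etree.ElementInclude handles basic href includes but not the XPointer addressing the article also covers):

```python
# Minimal sketch: resolving XInclude references with Python's
# standard library. book.xml and the chapter files are illustrative.
from xml.etree import ElementTree, ElementInclude

# book.xml might look like:
#   <book xmlns:xi="http://www.w3.org/2001/XInclude">
#     <xi:include href="chapter1.xml"/>
#     <xi:include href="chapter2.xml"/>
#   </book>
tree = ElementTree.parse("book.xml")

# Replace each xi:include element in place with the parsed content
# of the document its href points to.
ElementInclude.include(tree.getroot())

tree.write("book-merged.xml", encoding="utf-8")
```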


Monday, April 26, 2004

How to write an aggregator 

according to Gareth Simpson
This is a list of resources which are useful when building an RSS/Atom aggregator. I found them useful when building FeedThing; maybe other people will too.


BPEL evolvability 

BPEL Murmurs from around the Web
Claiming that BPEL is perfect and complete would be foolish. I think that a more interesting question is: can BPEL evolve to address some of the existing missing features, or is there something flawed in the BPEL interaction, scoping, control flow and exception/eventing model that would prevent it from filling the existing gaps in the future? The work that Collaxa has done in the user task management area is one piece of evidence that user tasks and portal integration can very easily be added to BPEL. More specifically, it showcases the composability of the web services/SOA approach compared to the monolithic approach of more traditional workflow languages. On the support for unstructured flow, the BPEL eventHandler and the BPEL pick activity actually represent a great step forward, so I think that this shortcoming is more a lack of knowledge around BPEL than a real limitation.
See also Involving Humans
So, by second-class citizen I mean a situation where people cannot directly influence a process. In other words, a process designer assumes some human involvement in certain steps of the process. E.g., I'm supposed to process purchase orders, so the designer defines a POService that participates in the process, and this service somehow involves me (via some sort of UI). The same applies to the Task Manager. These predefined 'entry points' are OK in most cases. However, it is difficult to perform what I call improvisation: something goes wrong and the automated participants do not know what to do. If these participants are not flexible enough (i.e., they don't have sufficient support for humans), the process ends up in some failure state.


Friday, April 23, 2004

Microsoft focuses on Domain Specific Languages for MDA 

UML and DSLs Again

For these reasons, we elected to define the Service Connectivity DSL using a purpose-built metamodel, itself defined as a member of a family of related metamodels. This gives us a natural and precise foundation for service connectivity, and a high-fidelity mapping to the underlying implementation artifacts, which include, of course, some amount of code. This is the same conclusion we have reached on other focused development tasks, and it led to similar DSLs for the Class Designer and the Logical Data Center Designer described in other postings. Support for extensible DSLs, built as a family with well-defined mappings between them and other development artifacts, has thus become the basis of Microsoft’s model-driven development strategy.

 

To summarize, we’d recommend using UML and UML-based tools for

  • Sketching,
  • White boarding,
  • Napkins,
  • Documentation,
  • Conceptual drawings that do not directly relate to code.

We’d recommend precisely defined DSLs and DSL-based tools for

  • Precise abstractions from which code is generated
  • Precise abstractions that map to variabilities in frameworks and components
  • Precise mappings between DSLs
  • Conceptual drawings that have precisely specifiable mappings to other DSLs or to code artifacts.

We’d recommend neither of the above for visual programming of detailed program logic (or at least for many years yet), despite Grady Booch’s optimism about executable UML expressed in an interview for the British Computer Society Bulletin that can be found here.

 



The Support Economy: Why Corporations Are Failing Individuals and the Next Episode of Capitalism
Reviewer: Sherry Wilkson from Austin, Texas

It is not often that a book like this appears. It is rich in history and analysis. It celebrates each of us as individuals and shows us that our place in history is unique: we are a new market. It explains why customer-centric organisations do not work, which is important to me as an HR person. It lays the foundation for a new way of thinking about business and its purpose. It does not talk down to the reader. It does not try to say everything is simple and can be reduced to boxes, arrows and change management. It sets out an agenda for a conversation (so did Cluetrain, but this is the next step) and invites comments and discussion. It is optimistic but grounded in a real understanding of what technology can and cannot do (this is from Zuboff's Smart Machine). This is a book which is a must read. Some will hate it and dismiss it, but that is because they are trapped in the reality of today and cannot open themselves to tomorrow.



Thursday, April 22, 2004

Metaphors of software development 

via Bill de hÓra
Nonetheless, to me the city is not a good metaphor for software, and never has been. Writing software has not felt like designing and building a house. Using software is not like using a house; a program is not a habitat. Changing software is not like demolition (if it is, you're in trouble). I've built houses, I've built SOAs, and the processes have nothing much to do with each other as far as I can see. The problems in software are down to complexity, expediency in engineering and duplication much more than insufficient architecture. We can call making and thinking about software systems building and architecture if we want, but that's as far as it goes. I like organic and growth metaphors over physical construction or industrial ones; they ring true to me.

And yes, my business title is Technical Architect. But let's be careful about aggrandizing ourselves with direct comparisons to Architects in other disciplines. We are not architects or engineers like those folks are; they have a richer, deeper background to draw from. The likes of Marcus Vitruvius, Brunelleschi or Thomas Slade don't exist in software. But it doesn't matter what your process, technical and architectural leanings are. At some point, if you want a system that will do useful work and grow with your needs, you will need to find competent people to write the code.



Friday, April 16, 2004

Are boxes and arrows enough? 

Radovan Janecek
For Microsoft, BPEL is not an execution language. Each platform will have its own execution engine leveraging the platform's strengths. My take: so-called 'platform strengths' are sometimes (often?) beyond BPEL's expressive power. Try to use business rules in BizTalk and export them as a BPEL document, for example... I'm going to provide more detailed examples of business processes (and their states) here, as I was asked by Edwin. In this context, this presentation led me to a more general question: to what extent can we effectively use the 'boxes and arrows' programming style?


Thursday, April 15, 2004

User Interaction and semi-structured workflows 

Tutorial 6: Working with the TaskManager Service
BPEL is a language for composing multiple services into an end-to-end business process. People and manual tasks are often an integral part of such business processes (particularly for exception handling or workflow/approval-related tasks). In this document, you will learn how to use the Collaxa TaskManager service to model user interactions within a BPEL business process.


RSS Tipping Point  

Steve Gillmor
Those who read between the lines of these conversations can intuit that RSS is rapidly approaching a critical mass in the enterprise. Notification, subscription, presence, and awareness services are congealing into a real-time events-based information routing fabric that outpaces other existing legacy channels. Such channels include email, developer conferences, print publications, and broadcast media.
See also
Context is NetWeaver's secret sauce—the master data and the analytics around it. It answers questions: With whom have we met at the customer's company? What have they bought already; how quickly do they act? NetWeaver's event processor acts on that context, converting time-sensitive alerts to instant messages and sending them to interested parties—for example, to a product manager for a product mentioned in the event.

"We take a lot of these resolutions and capture them, not for the sake of recording the history but to find repetitive patterns," Agassi said. These ad hoc scenarios are in turn captured as guided procedures and offered as resolution scenarios when the event next occurs.

Weblog Wishlist Manifesto
In a recent thread on the Bloggercon weblog, Dave Winer posed a question: "Question: What's next in writing tools for weblogs?". Well over a hundred responses came in. After printing out and reading through the 40+ pages of responses, a few major themes began to emerge. Bloggers wanted to create more easily, connect with others fluidly, create and manage communities around their weblog and throughout the blogosphere, and conserve their content. Here is what they said.


Wednesday, April 14, 2004

Bouquets, Brickbats for Microsoft's 'Channel 9' 

Ryan Naraine

Microsoft's attempts to put a human face on its corporate image took a small step forward with the unveiling of Channel 9, a Web site promoting dialogue between software evangelists and users. But problems with streaming video formats and Web browser incompatibility have left many users unable to access the site's hybrid features.

Channel 9, created by a team of Microsoft evangelists to encourage dialogue between Microsoft employees, software users and third-party developers, includes video interviews, blog entries, RSS feeds, wikis and discussion forums.

On Tuesday, when the site went live, it logged more than 10,000 concurrent visitors, according to Microsoft Longhorn evangelist Robert Scoble, one of the primary movers behind the creation of Channel 9.

But many visitors flocking to the site soon found that proprietary Microsoft technologies shut them out. For instance, users of Mozilla's Firefox Web browser encountered problems rendering the pages and viewing the video feeds. Channel 9, which makes heavy use of video interviews with Microsoft employees, delivers the streams exclusively in the Windows Media 9 format, effectively ignoring a section of the audience using Linux, Unix and competing operating systems.



Tuesday, April 13, 2004

Model Driven Architecture, Domain Specific Languages and Code Generation 

More MDA Critique
Comment from Neil at Mar 28, 2004 10:17 PM:

Amen, hallelujah, and sing it to the rafters.

The problem with MDA, and the Shlaer-Mellor method that preceded it in my experience, is the way it is taught. I've made semi-religious converts out of people who swore blind that generating code from models would never work in practice. I do it by getting them to build their own code generator with Access and VB (don't laugh). They build a little database to hold their meta-information, e.g. for Ethernet message structures, then write some VB to walk the database and generate their code. They get a real Eureka thrill out of it. After that, you've got their attention and can start to introduce them to loftier concepts.
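The same exercise sketched in Python rather than Access and VB: a tiny "database" of meta-information about message structures, and a generator that walks it to emit code. The message definitions below are invented for illustration.

```python
# A minimal metadata-driven code generator: the dict stands in for
# Neil's little Access database, and the generator walks it to emit
# C struct declarations. The message definitions are made up.
MESSAGES = {
    "Heartbeat": [("seq_no", "uint32"), ("timestamp", "uint32")],
    "StatusReport": [("node_id", "uint16"), ("state", "uint8")],
}

C_TYPES = {"uint8": "unsigned char", "uint16": "unsigned short",
           "uint32": "unsigned int"}

def generate_struct(name, fields):
    """Generate a C struct declaration from one metadata entry."""
    lines = ["typedef struct {"]
    for field_name, field_type in fields:
        lines.append(f"    {C_TYPES[field_type]} {field_name};")
    lines.append(f"}} {name};")
    return "\n".join(lines)

for name, fields in MESSAGES.items():
    print(generate_struct(name, fields))
    print()
```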

A lot of the tools on the market, at least the ones that I've worked with, are difficult for people to accept because they hide what's going on under the covers. Users cannot develop faith in the tools, and many turn against them.

A good educational programme makes all the difference.

Comment from Eric Newcomer at Mar 28, 2004 10:33 PM:

Hi -

I wish that every time I criticize MDA, people wouldn't automatically assume I don't understand it.

The point I was trying to make is that MDA is not going to work out as a lot of people seem to hope that it will, as the software industry's next great abstraction.

I agree with a lot of the points you make about the practical use of UML and MDA and if the proponents of those technologies were to say similar things I think it would be very helpful.

Instead, we get a lot of hype about how MDA is going to become the new, single mechanism for application development.

I agree code generation is good and helpful. I agree diagrams and models have their place. But I basically think the grand vision of UML and MDA is never going to be achieved and people should stop talking as if it were.

Comment from Stefan Tilkov at Mar 28, 2004 10:34 PM:

Neil, I totally agree, as is to be expected. And I won't laugh at you for the Access/VB idea; in fact, I often use Excel as an example where code generation can turn information from something an end-user can understand into the metadata for a "real" application.

Comment from Stefan Tilkov at Mar 28, 2004 10:44 PM:

Eric, I think we essentially have no real difference in our opinions. But I am not sure what the great vision behind MDA you allude to is supposed to be, and who is actually hyping it. I actually *am* a proponent of MDA. Who are those proponents who claim that MDA is going to be the silver bullet?

Somehow, every week somebody else seems to be bashing MDA, and - more or less unintentionally - makes people turn away not only from that great, unrealistic vision, but also from every sensible use of metainformation and code-generation.

Comment from Neil Earnshaw at Mar 29, 2004 3:18 AM:

I know that MDA is not applicable to all situations. I think that it does become more applicable as you scale your systems up. At the larger end of the system scale, it will easily repay the investment you have to make in training and developing code generators.

A major trap that many people fall into is expecting one code generator from their tool vendor to produce all their code. They always hit performance problems. (I'm coming from the defence sector here; I don't expect IS code generators to have quite the same performance problems as their architectures are more uniform.) If you model your system as a set of domains, then you can get great performance results by developing domain-specific code generators. A code generator for Ethernet messaging, another for 1553, one for your GUI domain, and so on. These little domain-specific code generators are really easy to develop and are especially rewarding when the handcrafted alternative requires weeks of drudgery to implement. And they can be easily linked in with your must-be-done-by-hand code.

Think of MDA today as being in an analogous situation to the first 3GL compilers: all those assembler boffins walking around saying, "These compilers will never match my hand-written code." Given time, the compiler writers came up with the goods. And yes, we still have applications today that require handcrafted assembly code. That need will probably never go away, but the balance has shifted from writing assembler to writing 3GL code. I see the same thing happening with MDA. Just give it time.



Monday, April 12, 2004

Defining REST, Applications and Workflow 

“The central feature that distinguishes the REST architectural style from other network-based styles is its emphasis on a uniform interface between components”

That uniform interface is defined by four interface constraints:

1. Identification of resources
2. [direct] manipulation of resources through representations
3. self-descriptive messages
4. hypermedia as the engine of application state

These constraints enforce a specific kind of “uniform interface” to guide the behavior of application components. The motivation for using these constraints is to simplify system architecture and improve the visibility of component interactions.

“The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs.” (from the infamous Roy Fielding Dissertation)

This leads to systems architected with REST applications crossing organizational boundaries, where specific APIs cannot be controlled by a single authority, and with more efficient object-oriented architectures supporting processing within a specific organizational and workflow context. The optimal split between REST and object-oriented architectures can be difficult to predict. For workflow applications, one of the most interesting constraints to consider is “hypermedia as the engine of application state”.

REST designs emphasize that, by clicking on links, you should be able to access an application’s representation without having to remember a “history of choices” or any “hidden state” in order to know how to interpret that representation.

The easiest way to do this is to have a single URL that returns the entire application representation.

If this is too large, you can divide the representation into smaller pieces that are connected by URLs. A chapter index provides a metaphor for a set of links that represent the entire document and provide access to each piece as a navigational choice. You can also provide partially expanded views, where one or more of the chapters of the index are expanded into content.
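A small sketch of what such a chapter-index representation might look like (the URLs and payloads are illustrative, not any particular service's format):

```python
import copy

# A single "chapter index" representation: one resource whose links
# give access to every piece of the document, so a client needs no
# hidden state to reach any part.
index_representation = {
    "self": "http://example.org/book",
    "chapters": [
        {"title": "Introduction", "href": "http://example.org/book/1"},
        {"title": "Constraints", "href": "http://example.org/book/2"},
        {"title": "Trade-offs", "href": "http://example.org/book/3"},
    ],
}

# A partially expanded view: one chapter's content is inlined while
# the others remain navigational links.
expanded_view = copy.deepcopy(index_representation)
expanded_view["chapters"][0]["content"] = "...chapter text inlined here..."
```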

If it isn't possible to have a link to every part of the representation in a single step, you should at least make sure that the representation is fully connected, allowing access to any part of the representation through multi-step link paths. One simple example is a document where each page has a previous and next page link. If the document is transformable, however, things can get a little more complicated. If you change a piece of a document, you can disrupt links that point to it. In the worst case, this fragmentation will inadvertently divide the representation into pieces that are no longer connected.

It is also possible to connect the URLs of an application through some kind of URL-based query facility. This suggests a novel definition of what an application is: a REST application is a collection of mutually connected URLs. As new URLs are added to the representation, if those URLs provide a path back to all the other URLs in the application (possibly over many steps or through a URL-based query function), then each newly added URL and its supporting process become part of the definition of the application.
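A sketch of what checking that "mutually connected" property might look like. The link graph below is an in-memory stand-in for doing a GET on each URL and extracting its links, and every URL is made up:

```python
from collections import deque

# URL -> links found in that resource's representation (a stand-in
# for fetching each URL and parsing its links).
LINKS = {
    "/app": ["/app/reports", "/app/search"],
    "/app/reports": ["/app", "/app/reports/42"],
    "/app/reports/42": ["/app"],      # links back: part of the app
    "/app/search": ["/app"],
    "/external/page": [],             # no path back: not part of it
}

def reachable(start):
    """All URLs reachable from `start` by following links."""
    seen, queue = {start}, deque([start])
    while queue:
        url = queue.popleft()
        for target in LINKS.get(url, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

def mutually_connected(urls, root="/app"):
    # Every URL must be reachable from the root, and the root must be
    # reachable from every URL (possibly over many steps).
    return all(url in reachable(root) and root in reachable(url)
               for url in urls)

app_urls = ["/app", "/app/reports", "/app/reports/42", "/app/search"]
print(mutually_connected(app_urls))                        # True
print(mutually_connected(app_urls + ["/external/page"]))   # False
```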

This means that the URLs returned from a Google search are not part of the Google REST application unless they have a link that points back to the Google query screen. A reporting application that has a home page which lists existing reports, and allows you to search and create new reports, would only include the generated reports as part of its REST application if those reports had a link back to its application home page. Otherwise, it would just be an application that listed, created and searched external resources.

In order to keep application boundaries clear, a Web resource should not make changes to Web resources that are not connected to it. Any data that is modified behind the veil of a URL's supporting process, however (like object-oriented APIs and database tables), would generally be considered part of the process application even if it is not accessible from a URL, but it would not be part of the “REST application”.

A similar argument applies to REST service-oriented applications. A GET on a REST Web Service should return a view of the piece of the application state that is being managed, and the service should support POST, PUT, or DELETE to modify its state. Any newly created resources should link back to the generating application process. Deleting resources should not split the application's network of links into disconnected islands.

REST workflow applications would generally have a list and query resource for managing workflow instances. Each workflow instance would then provide a URL-backed resource for managing that individual workflow instance. All processes that modify the state of a workflow instance should annotate that portion of the workflow state with their process URL. A GET on that process URL should return a view of the portion of the representation being managed, along with links (or views) to the rest of the application state. Whether there are additional application constraints, consistent with BPEL or other workflow languages, is not really relevant to REST architecture.
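A rough sketch of that shape, reduced to in-memory functions (all names, URLs and fields here are hypothetical, standing in for the HTTP verbs on real resources):

```python
import itertools

_ids = itertools.count(1)
instances = {}   # instance URL -> workflow state

def list_instances():
    """GET on the list/query resource."""
    return {"self": "/workflows", "instances": sorted(instances)}

def create_instance():
    """POST to the list resource creates a new workflow instance."""
    url = f"/workflows/{next(_ids)}"
    instances[url] = {"self": url, "steps": {}}
    return url

def update_step(instance_url, step, data, process_url):
    """PUT from a participating process; the modified portion of the
    state is annotated with the URL of the process that changed it."""
    instances[instance_url]["steps"][step] = {
        "data": data,
        "modified_by": process_url,   # GET here shows that process's view
    }

url = create_instance()
update_step(url, "approval", {"status": "approved"},
            process_url="/processes/po-approval")
print(list_instances())
print(instances[url])
```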



Thursday, April 08, 2004

Robert O'Brien: an economic metaphor for constrained interfaces 

Via Mark Baker
The expressive power of an information system is proportional not to the number of objects that get implemented for it, but instead is proportional to the number of possible effective interactions between objects in it. (Reiser's Law Of Information Economics)

This is similar to Adam Smith's observation that the wealth of nations is determined not by the number of their inhabitants, but by how well connected they are to each other. He traced the development of civilization throughout history, and found a consistent correlation between connectivity via roads and waterways, and wealth. He also found a correlation between specialization and wealth, and suggested that greater trade connectivity makes greater specialization economically viable.

You can think of namespaces as forming the roads and waterways that connect the components of an operating system. The cost of these connecting namespaces is influenced by the number of interfaces that they must know how to connect to. That cost is, if they are not clever enough to avoid it, N times N, where N is the number of interfaces, since they must write code that knows how to connect every kind to every kind.

One very important way to reduce the cost of fully connective namespaces is to teach all the objects how to use the same interface, so that the namespace can connect them without adding any code to the namespace. Very commonly, objects with different interfaces are segregated into different namespaces.

If you have two namespaces, one with N objects, and another with M objects, the expressive power of the objects they connect is proportional to (N times N) plus (M times M), which is less than (N plus M) times (N plus M). Try it on a calculator for some arbitrary N and M. Usually the cost of inventing the namespaces is much less than the cost of the users creating all the objects. This is what makes namespaces so exciting to work with: you can have an enormous impact on the productivity of the whole system just by being a bit fanatical in insisting on simplicity and consistency in a few areas.
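The arithmetic is easy to check in a couple of lines of Python instead of a calculator:

```python
# The arithmetic from the paragraph above: expressive power of two
# segregated namespaces versus one combined namespace.
def segregated_power(n, m):
    return n * n + m * m

def combined_power(n, m):
    return (n + m) * (n + m)

n, m = 40, 60
print(segregated_power(n, m))   # 5200
print(combined_power(n, m))     # 10000 -- always at least as large,
                                # since (n+m)^2 = n^2 + m^2 + 2nm
```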



Wednesday, April 07, 2004

The religion of programming? 

Design Patterns


Monday, April 05, 2004

The difference between men and women 

New York Times Pfizer Gives Up Testing Viagra on Women
Women, the maker of Viagra has found, are a lot more complicated than men.

After eight years of work and tests involving 3,000 women, Pfizer Inc. announced yesterday that it was abandoning its effort to prove that the impotence drug Viagra improves sexual function in women. The problem, Pfizer researchers found, is that men and women have a fundamentally different relationship between arousal and desire.



Ant Lessons Learned 

x180: Ant and XML Build Files

My friends Mike Clark and Glenn Vanderburg asked me to write up my current thoughts on using XML as the build file format for Ant, based on watching it being used by Java developers over the years. It's very educational, as well as very sobering, to have software ideas that you put into form used by tens of thousands of programmers every day. Let me just say that you learn a lot from the process.

. . .

In retrospect, and many years later, XML probably wasn't as good a choice as it seemed at the time. I have now seen build files that are hundreds, and even thousands, of lines long and, at those sizes, it turns out that XML isn't quite as friendly a format to edit as I had hoped for.

. . .

Now, I never intended for the file format to become a scripting language—after all, my original view of Ant was that there was a declaration of some properties that described the project and that the tasks written in Java performed all the logic. The current maintainers of Ant generally share the same feelings. But when I fused XML and task reflection in Ant, I put together something that is 70-80% of a scripting environment. I just didn't recognize it at the time. To deny that people will use it as a scripting language is equivalent to asking them to pretend that sugar isn't sweet.

If I knew then what I know now, I would have tried using a real scripting language, such as JavaScript via the Rhino component or Python via JPython, with bindings to Java objects which implemented the functionality expressed in today's tasks. Then there would be a first-class way to express logic, and we wouldn't be stuck with XML as a format that is too bulky for the way that people really want to use the tool.

Or maybe I should have just written a simple tree-based text format that captured just what was needed to express a project and no more, and which would avoid the temptation for people to build a Turing-complete scripting environment out of Ant build files.
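For what it's worth, here is a rough sketch of the first of those approaches: the build file as a real script, with ordinary control flow where Ant's XML needed special elements. The javac() and jar() task functions are hypothetical stand-ins for bindings to Ant's Java task objects, not a real API.

```python
# Sketch: a build file as a real script. The task functions wrap the
# stock JDK command-line tools; a real system would bind to task
# objects instead of shelling out.
import glob
import os
import subprocess

def javac(srcdir, destdir):
    """Compile all .java files under srcdir into destdir."""
    os.makedirs(destdir, exist_ok=True)
    sources = glob.glob(os.path.join(srcdir, "**", "*.java"),
                        recursive=True)
    subprocess.run(["javac", "-d", destdir, *sources], check=True)

def jar(jarfile, basedir):
    """Package the contents of basedir into a jar."""
    subprocess.run(["jar", "cf", jarfile, "-C", basedir, "."], check=True)

def build(run_tests=False):
    # Plain if/for logic replaces Ant's <condition> machinery.
    javac("src", "build/classes")
    if run_tests:
        javac("test", "build/test-classes")
    jar("build/app.jar", "build/classes")

if __name__ == "__main__":
    build()
```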

Both of these approaches would have meant more work for me at the time, but the result might have been better for the tens of thousands of people who use and edit Ant build files every day.

Hindsight is always 20/20.



Sunday, April 04, 2004

Microsoft: The new plan 

CNET Martin LaMonica
In a keynote speech that opened last week's company-sponsored developer conference, Gates said Longhorn will make it easy to create applications that require more sophisticated graphics rendering, greater processing power and huge hard drives in future PCs. Windows is being overhauled to process data in the Extensible Markup Language (XML) and to use HTML to publish information, he said. "The personal computer in less than three years will be a pretty phenomenal device," Gates said. "Exploiting the client, delivering data in the form of XML to the client and then having local rich rendering while still being able to have mapping to HTML for reach--that's something we're making very simple."

