Friday, February 27, 2004
Overview of a Single Sign-on Solution
Classes and Pages
AppLauncher Web Application
Retrieving Tokens in the Web Applications
Enhancing Your Single Sign-on System
Installing the Samples
Thursday, February 26, 2004
The State interface is also very minimal. I'm erring on the side of caution for the time being and allowing other interfaces to extend the basic State requirements. Implementations of those objects can be recast inside the different Action and Workflow implementations without cluttering up the method signatures with specific dependencies. A toMap() method will probably be in order eventually. Both of State's methods return java.lang.Object, which likewise must be recast. This flexibility comes with an accompanying degree of complexity, which we'll explore later.

The Action interface is simple. Actions are given a reference to the State, and they don't return anything. Their role is to alter the State in some way, or to trigger an external process as a result of the state. The other components in the framework make no assumptions about what actions actually do. A Workflow component can check the state before and after an action has fired, and use that information to figure out what to do next, but it has no other way of knowing what goes on therein.

Workflow implementations accept a reference to the State object but, unlike Action objects, they also return a State reference. This is important, because it might not be the same State as the one they received. Right now, Workflow components examine the application state, decide what to do next, maybe fire off some actions, forward control to another Workflow component, and, once they're done, give control back to whatever object called them. All they do is examine the state and decide whether or not to execute actions. By themselves, Action objects aren't capable of deciding the order in which they execute. They can set a flag in the state, thereby suggesting an order, but ultimately the Workflow components make that decision. The best part about Workflow components is that the brunt of their decision-making functionality can be replaced with a JSR-94-compatible rules engine.
In the future, the role of Shocks' Workflow components will be to delegate decisions to that JSR-94 rules engine by way of JMX. The entire Workflow engine will therefore be completely modular, extensible, and manageable. See also: Rethinking Model-View-Controller.
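The three interfaces described above might look something like the following minimal sketch. The method names and signatures are my assumption, since the actual Shocks source isn't shown here; the shape follows the prose: untyped Object accessors on State, void-returning Actions, and Workflows that may hand back a different State than they received.

```java
import java.util.HashMap;
import java.util.Map;

// State: minimal, untyped accessors. Both methods return
// java.lang.Object, which callers must recast.
interface State {
    Object get(Object key);
    Object put(Object key, Object value);
}

// Action: alters the State (or triggers an external process);
// returns nothing, and makes no ordering decisions itself.
interface Action {
    void execute(State state);
}

// Workflow: examines the State, may fire Actions, and returns a
// State reference -- possibly not the same one it received.
interface Workflow {
    State process(State state);
}

// A trivial Map-backed State implementation for illustration.
class MapState implements State {
    private final Map<Object, Object> map = new HashMap<>();
    public Object get(Object key) { return map.get(key); }
    public Object put(Object key, Object value) { return map.put(key, value); }
}
```

Because Action and Workflow each have a single method, a Workflow can inspect the State before and after firing an Action and route control accordingly, exactly as described above.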
Combining the structured sequence of a wizard with the navigational flexibility of a hub, guides are another type of task flow commonly used in web applications. Unlike wizards, guides do not assume any sort of strict interdependence between steps. As a result, guides can incorporate the navigational flexibility needed for users to access, process, and edit the forms in any order. And unlike hubs, guides also provide the requisite structure and sequencing needed to ensure users can successfully create and manage lengthy or complex transactions.
Wednesday, February 25, 2004
Macroevolution:
- Duplicating a Branch of the Differentiation Tree could involve Cut, Copy & Paste at the DNA Level
- Coevolution of the genes and DNA-binding proteins
- Cleaning up the crosstalk - an example of "post-adaptive adjustment"
- Progressive Evolution
The mystery vanishes if the probability of growth of the differentiation tree exceeds that of pruning
Like frost on a window pane, and as inevitable
Monday, February 23, 2004
Yet to date, there is little recognition of the following commonsense point: If indeed the programs are so complex, then it is likely that they, too, will be potentially subject to hundreds of thousands, perhaps millions of egregious mistakes of adaptation. Here I am not only talking about “bugs” --- failures which stop a program from running altogether. I am talking about mistakes of adaptation, ways in which the program fails to do what it is supposed to do, fails to meet the needs of the people who use it, or is more awkward, more annoying, less useful, than it is supposed to be. If the analysis given in this chapter is correct, then it is fair to say that truly successful programs can only be generated; and that the way forward in the next decades, towards programs with highly adapted human performance, will be through programs which are generated through unfolding, in some fashion comparable to what I have described for buildings... Of course I am in favor of small steps, of adaptation through trial and error, and of what we may call evolutionary adaptation. But this is not the central point at all. After listening to all these computer scientists’ comments, and taking them to heart, I realized that I had failed, in my lecture, to emphasize the real essence of all generated structures. The real essence lies in the structure-preserving transformations which move the structure forward through time, and which are primarily responsible for the success of the generating process. The needed transformations are not merely trial-and-error steps, or some neat way of continually checking and making things better... To assume that the point of generated structures is merely slow, step-by-step evolutionary adaptation, is to make the same mistake that early adherents of Darwinism made in biology --- to assume that small steps alone, modification coupled with selective pressure, would be sufficient to get a genotype to a new state, hence to create entirely new organisms ... 
and so on. This does not work, and it is now widely recognized not to work, because it lays too little emphasis on the (hitherto) unknown transformations which actually do the hard work of moving the evolving organism through stages that lead to its coherence and its geometric beauty in the emerging genotype.
So, the pendulum has begun to swing back towards the rich client model. But what about application deployment and update? The TCO of an application is still as important as it ever was, probably more so. Do you sacrifice manageability for usability, or vice versa? Happily, you don't need to sacrifice either. Key capabilities now exist which mean that we can take full advantage of the rich client model, providing the user with an excellent user experience, while at the same time reaping the benefits of centralized deployment and update. In short, this new generation of client applications, the so-called "smart" clients, provides the best of both worlds and adds the intelligence to manage data and connectivity to produce an extremely compelling user experience.
So I think about this a lot. And I do have some notes about XAML in my blog. But the most … I've punted on it. There are some reasons the browser became successful and we all started doing this… I don't know if you know this but I built the browser for Microsoft at one point. I built IE [Internet Explorer] 4 and built the DHTML and built the team that built it. And when we were doing this we didn't fully understand these points. And one of the points was people use the browser as much because it was easy to use as almost anything else. In other words I'd talk to customers and say we can add to the browser all these rich gestures. We can add expanding outlines and collapsing and right click and drag over and all that—all the stuff you're used to in a GUI. And without exception the customer would tell me please don't do that, because right now anyone can use the sites we deploy and so our training and support costs are minimal because it's so self-evident what to do. And if you turn it into GUI we know what happens, the training and support costs become huge. So one of the big values of the browser is its limits. . . . So I think what we want isn't a thick client, and I wasn't leading that way. But I think there will be some cases where there's a thick client. I think in general we still want to say an app is just something you point to with a URL. And you don't have to deploy it. And you can throw it out of memory at any time, and there's no code and no libraries and no interdependencies. All the great things about installation-free software that the browser gave us. And the other big thing of course is that if you make a change everybody sees the change. So how do I get my cake and eat it too? How can you have a model where you have a thin client just like we have today and yet it works well against the data model. And I think what you do is you have two things that you point at on the web. 
One thing you point at is the information, and the other is the way you present it and interact with it. And then the browser is smarter and it knows how to cache. It already knows how to cache your pages and now it knows how to cache your information. And it knows how to do offline synch so you actually go offline and come back online and can synchronize. But other than that it's still a browser. You have to know one thing once and that's your browser. Then you just point to the URL and you run them in the way that you do in the browser model as opposed to .EXEs. Now, Longhorn I think is conflicted about this. Bill [Gates, Microsoft chairman] has always wanted a thick client. Eric Rudder, in part, is where he is because he promised Bill a thick client—and partly because he's brilliant and a great guy. I'm a big fan of Eric Rudder, which probably sounds funny coming from someone from BEA. Eric and I have a lot of mutual respect for each other. But he's trying to live up to a vision that in my opinion is mostly not in the customers' interests and that is a thick client. So I think the client's going to get thicker but will still basically be a thin-client paradigm. And I think that will be true because the customers, given the ease of use of a radio, don't move away. Let me give you one other example. Word processing. I used to use this command-oriented word processor called Volkswriter. And other people used Emacs. It had all these powerful commands you could type in. And along came the WYSIWYG word processors. They were cooler and they printed much better, but I lost half the power I had. I could no longer say search for everything that contains this and then when you find it modify this part to include this note—things I could've said in the old command-oriented language. And no one ever went back. To this day word processors don't let you do that. 
Because it turned out there were 10 times as many people that needed the ease of use as needed the custom stuff.
Thursday, February 19, 2004
Microsoft is moving ahead with plans to deliver its XML programming language to programmers inside and outside the company. By mid-year, Microsoft plans to make its XML programming language -- currently code-named "Xen," and soon to be renamed "C Omega" [X Omega ?] -- available to researchers and academics. At the same time, Microsoft's Research division is making the new language available to product teams inside the company for possible use or inclusion in forthcoming products.
Tuesday, February 17, 2004
This gets me thinking about hypercube projections of increasing dimensionality as backtrack-free generative sequences of morphological unfolding, talked about by Christopher Alexander
From that it follows that, if we represent n-dimensional data structures, we'll have to create projections. Projections are easy stuff, mathematically speaking (i.e., they involve fairly simple vector math). Visualizing them is not too difficult either. Consider hypercubes, which are one of the easiest cases because they're fully symmetrical graphs. For example, this is what projections of hypercubes of dimensions n > 3 into 2D look like [source]:
A 2D projection of a, say, 12D space might be pretty to look at, but I think most users would avoid that kind of complexity and its consequent cognitive overload.
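To make the "fairly simple vector math" concrete, here is a small sketch of one common way to produce those symmetric 2D hypercube drawings: treat each vertex as a bit vector and map axis i onto a unit vector at angle i·π/n. The choice of projection basis is my assumption; any linear map from n dimensions to 2 would do.

```java
// Project the 2^n vertices of an n-dimensional hypercube into 2D.
// Axis i of the cube is mapped to the 2D unit vector at angle i*pi/n,
// which yields the familiar rotationally symmetric projections.
class HypercubeProjection {
    static double[][] project(int n) {
        int count = 1 << n;                      // 2^n vertices
        double[][] points = new double[count][2];
        for (int v = 0; v < count; v++) {
            double x = 0, y = 0;
            for (int i = 0; i < n; i++) {
                if ((v >> i & 1) == 1) {         // axis i contributes if bit i is set
                    double angle = Math.PI * i / n;
                    x += Math.cos(angle);
                    y += Math.sin(angle);
                }
            }
            points[v][0] = x;
            points[v][1] = y;
        }
        return points;
    }
}
```

Edges connect vertices whose bit vectors differ in exactly one bit, so drawing the projected graph only needs a Hamming-distance check on top of these points.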
Friday, February 13, 2004
Mike Pusateri, Elisabeth Freeman and Eric Freeman at Disney share their enterprise blogging initiative. It's very similar to the experience we have had with Socialtext, though without the integration of blog and wiki with enterprise requirements in mind. The focus on blogging for project communication instead of just individual expression is spot on.
The Fluidtime project wants to contribute to a society where people's activities flow into each other, where there is less waiting and more time for contemplation and idling, and where people flow with their time and not against it. We do this by stimulating debate, developing methods and creating services and tools for event-based timing. We want to link people to the 'real' time of events and appointments through interfaces that are simple and effective.
Wednesday, February 11, 2004
- Wikis as common workspaces.
- Polls, and any discussions where there is a desire to "sign" one's name, should be done on mailing lists. Mailing lists should be used for new discussions.
- Revisiting and revising should be done on a wiki.
- Weblogs for summaries, progress reports, and to highlight areas of current focus.
- Versioned specifications, test suites, and running code provide stability.
But a payoff even greater than enterprise whuffie will emerge as groups begin to manage themselves without IT involvement. Today's social software is a big step in that direction, using browser-based tools to create Wiki pages and blogs simply by typing in a name and, in SocialText's case, allowing RSS aggregators such as NetNewsWire to track changes with color-coded edits. SocialText's recategorization technology allows for broadcasting posts to multiple blogs and therefore RSS streams. But aggregating external feeds within the software has yet to be implemented. When social software can collect RSS subscription and consumption patterns and apply the aggregated results to dynamic indexes of internal and external microcontent, the resulting real-time streams will fuel the next generation of enterprise business intelligence.
Tuesday, February 10, 2004
Our mission in the Social Computing Group is to research and develop software that contributes to compelling and effective social interactions, with a focus on user-centered design processes and rapid prototyping.
Our work includes the Sapphire project, sharing, mobile applications, trust and reputation, collaboration, and storytelling. To facilitate rapid prototyping, we also have an online lab for running studies to evaluate our social user interfaces.
Thursday, February 05, 2004
I recently had a chance to look at IBM's direction in content management. There are several solutions for content management in IBM's portfolio right now. Some of them can work together. Some of them have overlaps. And some of them do not have good integration with others...
The strategic plan (as I understood it) is to leverage JSR 170 to provide a general, unified interface to all content stores and products. In addition to JSR 170, WebDAV can be used to some extent. It looks like JSR 170 will address the basic features important for content management in the Java world. I am personally interested in a comparison of JSR 170 with Microsoft's WinFS (and of both with Topic Maps).
Last year I created a network map of political books based on purchase patterns from major web book retailers. The network revealed a divided populace... at least amongst book readers. . . . It appears that many of the books have changed from last year but the pattern is the same. Two distinct clusters, with dense internal ties, have emerged. These political books are preaching to the converted! This year we find more bridges between the clusters. Yet, this network of 67 books is dependent on just 2 nodes to remain connected -- Sleeping with the Devil and Bush at War.
Wednesday, February 04, 2004
. . . Well, remember the 4 pillars of service orientation:
- Boundaries are explicit
- Services are autonomous
- Services share schema and contract, not types
- Compatibility is policy-based
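The third pillar -- share schema and contract, not types -- is the easiest to misread, so here is a hedged sketch of what it looks like in code. The names (OrderMessage, OrderService) are hypothetical: the point is that the boundary exposes a plain data message (the schema) and an interface (the contract), never the service's internal domain classes.

```java
// Hypothetical illustration of "services share schema and contract,
// not types". The consumer compiles only against these two types.

// Schema: a plain, serializable message -- no behavior, no internals.
final class OrderMessage {
    final String orderId;
    final int quantity;
    OrderMessage(String orderId, int quantity) {
        this.orderId = orderId;
        this.quantity = quantity;
    }
}

// Contract: the explicit boundary. The service stays autonomous
// because nothing behind this interface leaks to the caller.
interface OrderService {
    String submit(OrderMessage order);
}
```

A consumer that holds only OrderMessage and OrderService can talk to any implementation, on any platform, that honors the same schema and contract -- which is exactly what the explicit-boundary and autonomy pillars require.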
Ross Mayfield (CEO, SocialText): Process breakdowns require that people route around the failure in process and get things done. Informal networks are how companies get work done. Social software is about strengthening those informal networks. Social software gets the information out of email and attachments and brings it "above the fold" so that it can be used, linked, and indexed. . . .
Ad: Doing business is making friends. Helping people understand other cultures is important.
Mark: CRM systems try to put structure around unstructured data. There's real potential for using social software to create the next generation of CRM systems.
Chris Roon: eBay is a great example of social software applied to a particular purpose.
Me: CRM systems follow Sarnoff's law: the value of the system is linear in the number of contacts. We don't let people build communities within our CRM systems. Thus we never see Metcalfe or Reed scale benefits.
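The gap between those three laws is easy to see numerically. Under the usual formulations, Sarnoff's law values a network of n participants at n (broadcast audience), Metcalfe's at n(n-1)/2 (possible pairwise connections), and Reed's at 2^n - n - 1 (possible non-trivial subgroups). A quick sketch:

```java
// Network value for n participants under the three scaling laws.
class NetworkLaws {
    static long sarnoff(int n)  { return n; }                     // broadcast: value ~ audience size
    static long metcalfe(int n) { return (long) n * (n - 1) / 2; } // pairwise connections
    static long reed(int n)     { return (1L << n) - n - 1; }      // group-forming: subsets of size >= 2
}
```

At n = 10 the three values are already 10, 45, and 1013 -- which is the point of the comment: a contacts-only CRM leaves the Metcalfe and Reed terms on the table.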
Ben: these kinds of systems have to work bottom-up because people own their relationships.
Tuesday, February 03, 2004
John Seely Brown, former director of the Xerox Palo Alto Research Center, says he believes that recent changes in software technology could allow big gains in productivity and innovation. The opportunity, he says, is to move beyond the limitations of centralized systems for automating business operations, like enterprise resource systems. "Those systems are prisons," said Mr. Brown, who is scheduled to speak at today's conference.
The software plumbing of computing, Mr. Brown explains, is evolving, and so is Internet-based software for individual workers. Software systems built on Web standards, he said, can be used as pick-and-place building blocks, instead of the more formal hierarchical systems of the past.
Mr. Brown also points to the rapid development of what he calls "social software" like instant messaging, Weblogs, wikis (multi-user Weblogs) and peer-to-peer tools - all of which make it easier for workers to communicate and collaborate online, almost instantaneously.
The combined result, Mr. Brown said, is information technology that can amplify social interaction and enhance workers' understanding of what is happening around them. The benefit, he added, could be to increase their ability to "collectively improvise and innovate."
That is a key to productivity and peak performance, according to Mr. Brown. Business, he said, is a lot like soccer. In soccer, there are some set plays, but the best teams also display a wealth of effective improvisation based on the players' deep knowledge of one another. "It's the same in the best corporations or start-ups," he said. [New York Times: Technology and Worker Efficiency, by Steve Lohr]
Sunday, February 01, 2004
By evaluating an API with respect to each of these dimensions, you can form a picture of what the API looks like overall with respect to the framework. We like to create pretty graphs to do this. For example, here is a graph of an evaluation of some fictional API:
The black dotted line in the graph represents the API being evaluated. The spokes in the 'wheel' represent each of the different dimensions. End points (the middle and outer edges) on each spoke represent the end points on the scale for each dimension. Thus the point on each spoke where the black dotted line crosses it corresponds to the point on the scale for that dimension that the API has been valued at.
. . . The blue line on the graph represents the profile of a particular developer persona. For example, it shows that this persona prefers APIs that expose aggregates (the outer edge on the abstraction level scale) and APIs that have small working frameworks (the inner edge, or middle, of the working framework scale). If you compare the points where the blue line crosses each spoke with the points where the black line crosses each spoke, you get an idea of how well the API matches the persona's ideal API. Hopefully you can see that this particular API doesn't really hit the mark for this persona (there are big differences between the points where the two lines cross on most, if not all, of the spokes in the graph).
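The eyeball comparison of the two lines can also be made quantitative. One simple scoring rule (my assumption, not from the article) is to normalize each spoke to 0..1 from the hub to the rim and average the absolute gap between where the API line and the persona line cross each spoke -- lower means a better fit.

```java
// Mismatch between an API profile and a persona profile on a radar
// chart. Each array holds the crossing point on each spoke,
// normalized 0..1 from the hub to the rim. Returns the average gap
// per dimension; 0 is a perfect match.
class ApiFit {
    static double mismatch(double[] api, double[] persona) {
        if (api.length != persona.length)
            throw new IllegalArgumentException("profiles must cover the same spokes");
        double total = 0;
        for (int i = 0; i < api.length; i++)
            total += Math.abs(api[i] - persona[i]);
        return total / api.length;
    }
}
```

With a rule like this you could rank several candidate API designs against one persona, or one API against several personas, instead of judging the chart by eye.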