Wednesday, September 19, 2007
Bob DuCharme
Who assigns this metadata, and why do they do it? You have three choices: people who do it because they're paid to, people who do it because they want to, and automated processes.
. . .While some metadata is free, such as the size of a file and the last time it was edited, creating new metadata is never completely free. If you're not paying people outright, you must devise and then implement some system that makes people want to do your metadata entry without being paid.
. . .if it doesn't make your life more fun, it had better do something to make your life easier. Coming up with that incentive is the real silver bullet, if you want to avoid writing a check for human labor or for automated systems to do this work for you.
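The "free" metadata mentioned above is free precisely because the filesystem records it as a side effect of normal use. A minimal Java sketch of the distinction (the file and its contents here are invented for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributes;

public class FreeMetadata {
    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("demo", ".txt");
        Files.writeString(p, "hello");

        // "Free" metadata: already maintained by the filesystem,
        // no human data entry required.
        BasicFileAttributes attrs = Files.readAttributes(p, BasicFileAttributes.class);
        System.out.println("size:          " + attrs.size());
        System.out.println("last modified: " + attrs.lastModifiedTime());

        // Descriptive metadata (subject, keywords, intended audience)
        // has no such system of record: someone, paid or incentivized,
        // has to supply it.
        Files.delete(p);
    }
}
```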
Friday, September 07, 2007
Yasser Roudi*, Peter E. Latham
A critical component of cognition is memory—the ability to store information, and to readily retrieve it on cue. Existing models postulate that recalled items are represented by self-sustained activity; that is, they are represented by activity that can exist in the absence of input. These models, however, are incomplete, in the sense that they do not explain two salient experimentally observed features of persistent activity: low firing rates and high neuronal variability. Here we propose a model that can explain both. The model makes two predictions: changes in synaptic weights during learning should be much smaller than the background weights, and the fraction of neurons selective for a memory should be above some threshold. Experimental confirmation of these predictions would provide strong support for the model, and constitute an important step toward a complete theory of memory storage and retrieval.
. . .we no longer know how the brain holds so many memories. Roudi and Latham speculate that the answer lies in multiple, weakly coupled networks. However, until that, or some other idea, is shown to be correct, we will have to be content with just remembering, without the added knowledge of how we remember.
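The "self-sustained activity" the abstract refers to is usually illustrated with attractor networks, where a stored pattern persists as a stable state of recurrent dynamics. Below is a minimal Hopfield-style sketch of that idea — a classic attractor model for illustration only, not the specific model Roudi and Latham propose (their point is precisely that models like this one predict unrealistically high firing rates and low variability):

```java
import java.util.Arrays;

public class AttractorRecall {
    // Hebbian weights for a set of +/-1 patterns (Hopfield-style).
    static double[][] train(int[][] patterns, int n) {
        double[][] w = new double[n][n];
        for (int[] p : patterns)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (i != j) w[i][j] += (double) p[i] * p[j] / n;
        return w;
    }

    // Asynchronous threshold updates; the stored pattern is a fixed
    // point, so activity sustains itself once the cue is removed.
    static int[] recall(double[][] w, int[] cue, int sweeps) {
        int n = cue.length;
        int[] s = cue.clone();
        for (int t = 0; t < sweeps; t++)
            for (int i = 0; i < n; i++) {
                double h = 0;
                for (int j = 0; j < n; j++) h += w[i][j] * s[j];
                s[i] = h >= 0 ? 1 : -1;
            }
        return s;
    }

    public static void main(String[] args) {
        int n = 16;
        int[] stored = new int[n];
        for (int i = 0; i < n; i++) stored[i] = (i % 2 == 0) ? 1 : -1;

        double[][] w = train(new int[][]{stored}, n);
        int[] cue = stored.clone();
        cue[0] = -cue[0]; cue[3] = -cue[3]; // a corrupted cue...
        int[] out = recall(w, cue, 5);      // ...still settles on the memory
        System.out.println(Arrays.equals(out, stored)); // prints "true"
    }
}
```

Retrieval "on cue" here means giving the network a corrupted version of a stored pattern and letting the dynamics relax to the nearest attractor.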
ocean
Excessive layering plagues many Java applications because they submit to the component-and-container dichotomy. Containers inevitably produce layers because they are so darned intrusive; combined with so-called good programming practices, the result is that walls have to be thrown up to protect the mythically valuable business logic from the implementation details of the container. There has been an enormous pushback against intrusive containers that has culminated in JPA: it's now finally possible to write fairly complex Java applications that do non-trivial database work without messing with deployment descriptors, interfaces, and other container gobbledygook. This pushback against intrusive containers should evolve into a pushback against layers. You can already see this in next-generation frameworks like JBoss Seam, where all sorts of magic is employed to seamlessly (one wonders why they called it Seam?) merge the view, domain, and persistence layers into a single coherent body.
. . .Rather than N-tiers of a single monolithic application you're much more likely to end up with M-islands of services connected by some sort of bus (in SOA) or by a uniform interface (in REST).
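The JPA style the excerpt praises looks roughly like the sketch below: a plain class made persistent with a few annotations, no deployment descriptor and no container-mandated interfaces. The `Order` entity and its fields are invented for illustration, not taken from any particular application:

```java
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Persistence;

// An ordinary class; the annotations alone make it persistent.
@Entity
public class Order {
    @Id @GeneratedValue
    Long id;
    String customer;
    double total;

    // Typical usage (assumes a persistence unit named "shop" is
    // configured with some JPA provider):
    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("shop");
        EntityManager em = emf.createEntityManager();
        em.getTransaction().begin();

        Order o = new Order();
        o.customer = "Alice";
        o.total = 9.99;
        em.persist(o); // no DAO layer, no home/remote interfaces

        em.getTransaction().commit();
        em.close();
        emf.close();
    }
}
```

The contrast with older EJB-era containers is the point: the class itself carries its mapping, so there is no separate XML layer to keep in sync.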
Tuesday, September 04, 2007
Richard Hillesley
yet the GPL not only proved to be the best protector of the principles of free software, it was also the most business-friendly of the licenses available, for one simple reason: companies like IBM, HP, and SGI could contribute openly to the kernel in the knowledge that the developments of their competitors would also be fed back to the community. Paradoxically, the viral clause, the part of the license that so many people objected to because it wasn't "business friendly", is what made the license business friendly: it worked to the mutual advantage of all contributors and to the benefit of the project as a whole. The recognition and support of the GPL and its variants by diverse organisations has propelled the open source model beyond software. To belittle the contribution of the GPL to the success of Linux is perverse. To argue that now is the time to abandon the GPL (as Raymond suggested in 2005, and others have argued more recently) is just wrong. Time will tell, but the anti-DRM and software-patent clauses in the latest version of the GPL may prove to be as necessary and successful an innovation for the spread of free software as the viral clause in GPL 2 was, the clause which encouraged (or enforced) the notion that those who took advantage of the software should also contribute their changes back to the community, and which, 10 years or so ago, so many people took exception to... in the future, the notion of liberating business from the drug of DRM and the prison of software patents by means of a software license may turn out to have been equally prescient and "business friendly".