
Thursday, May 26, 2005

Brain learning and development 

THE NEUROSCIENCE AND PHILOSOPHY WORKSHOP 2
Grossberg then pointed out that learning in the two systems seems to be complementary, in the following sense. Sensory and cognitive expectations are matched via an excitatory matching process; cf. the example of looking for a blue pencil within a second for a large monetary reward. Motor expectations are matched via an inhibitory matching process. Such a motor expectation, in the hand-arm system, for example, codes where you want to move. It is matched against where your hand is now. When your hand is where you want it to be, you stop moving. Hence motor matching is inhibitory. The corresponding learning in the sensory and cognitive domain is "match" learning; in the motor domain it is "mismatch" learning. Match learning solves the stability-plasticity dilemma; it allows us to keep learning rapidly throughout life without also suffering rapid and inappropriate forgetting. Mismatch learning does suffer catastrophic forgetting, but in the motor domain this is good, since we only want to remember the most recent motor control parameters, those appropriate for controlling our bodies as they are now, not as they were, say, when we were children.
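
A rough Python sketch of the asymmetry Grossberg describes may help, with the caveat that the functions, vigilance threshold, and learning rates below are illustrative stand-ins, not his actual model equations: match learning updates a stored expectation only when it resonates with the input, while mismatch learning keeps overwriting the motor command with the current difference vector.

    import numpy as np

    def match_learning(weights, inp, vigilance=0.8, lr=0.5):
        # Excitatory matching: compare the stored expectation with the input and
        # learn only when they agree well enough (resonance). A poor match leaves
        # the old memory untouched, which is how prior learning is protected.
        similarity = np.dot(weights, inp) / (np.linalg.norm(weights) * np.linalg.norm(inp) + 1e-9)
        if similarity >= vigilance:
            weights = weights + lr * (inp - weights)
        return weights

    def mismatch_step(hand_position, target, gain=0.5):
        # Inhibitory matching: the difference vector (target minus current position)
        # drives the movement and is continually overwritten. Forgetting old
        # difference vectors is harmless; only the current one matters.
        difference = target - hand_position
        return hand_position + gain * difference   # keep stepping until the difference is zero

    w = match_learning(np.array([1.0, 0.0]), np.array([0.9, 0.1]))       # close match, so it learns
    hand = mismatch_step(np.array([0.0, 0.0]), target=np.array([1.0, 1.0]))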

Grossberg remarked that, if you study lots of brain architectures from a fundamental point of view, you see over and over again that they are organized into complementary processing streams, which individually realize a hierarchical resolution of uncertainty. He predicted that these complementary streams arise through a process of global symmetry-breaking during development. Complementarity and uncertainty principles are also found throughout the physical world, and he interpreted their occurrence in the brain as being a manifestation of the brain's role in measuring and modeling the physical world in an adaptive fashion.

THE NEUROSCIENCE AND PHILOSOPHY WORKSHOP 3

Eichenbaum's first major philosophical claim is that there is no coding purely for the function of memory. Rather, memory is simply the modification of the normal process of representation or coding that occurs as part of perception or interaction. There is nothing that functions like a tape recorder, no storehouse of memory. There is no strict separation, Eichenbaum claimed, between perception and memory. What we call memory is best explained as a modification of perceptual coding properties.

. . .

There are two ways to form a memory: by repetition and by arousal. The hippocampus, according to the questioner, might play a role in repetition pathways, whereas memories mediated by the amygdala might account for the arousal circumstance.

. . .

Eichenbaum began by emphasizing that there is a great deal more plasticity in the hippocampus; it is the best and the quickest of the memorizers. The cortex, by comparison, is a slow memorizer, remodeling itself very gradually. This two-component model turns out to be an engineering solution to the problem of catastrophic interference. Neural network models are notoriously susceptible to catastrophic interference. McClelland overcame this problem by juxtaposing a more rigid network over and above the highly plastic and fast-acting learning device.
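
The two-component idea is easy to caricature in code. The sketch below is only that, a caricature: the class names, learning rates, and interleaved-replay loop are my own illustrative choices, not McClelland's actual model, but they show why a fast, plastic store plus a slow, gradually remodeling one sidesteps catastrophic interference.

    import numpy as np

    class FastStore:
        # Hippocampus-like: highly plastic, memorizes each episode in one shot.
        def __init__(self):
            self.episodes = []
        def store(self, pattern):
            self.episodes.append(np.array(pattern, dtype=float))

    class SlowStore:
        # Cortex-like: remodels its weights very gradually, replaying old and new
        # episodes together so that new learning does not wipe out old learning.
        def __init__(self, dim, lr=0.01):
            self.weights = np.zeros(dim)
            self.lr = lr
        def consolidate(self, fast_store, passes=100):
            for _ in range(passes):
                for pattern in fast_store.episodes:      # interleaved replay
                    self.weights += self.lr * (pattern - self.weights)

    fast = FastStore()
    fast.store([1.0, 0.0, 0.0])          # learned immediately
    fast.store([0.0, 1.0, 0.0])
    slow = SlowStore(dim=3)
    slow.consolidate(fast)               # the rigid network drifts slowly toward both episodes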

Topics: Learning | Development



Wednesday, May 11, 2005

Subtext: A name by any other name 

See: Subtext

Marc Hamann

The power of a name is that it is just a formal entity to which a value can be bound, but that can enter into logical relations and constraints even before it is bound.

In the Subtext demo (I didn't explore beyond that) you had to create a specific value first, and then you could link it elsewhere. The fundamental concept of abstraction (defining relationships or properties that exist independently of particular values) is missing, except in the weakened form of copying a pre-existing relationship and plugging different values into it.

If a name only exists when it is already bound to a particular value, it isn't really a name, and you don't have real abstraction in your language.
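
To make Hamann's point concrete in conventional code, here is a tiny hypothetical Python sketch (the Name class and the double_of relation are invented for illustration and have nothing to do with Subtext's internals): the name participates in a relation while still unbound, which is exactly the abstraction he says is missing.

    class Name:
        # A placeholder that can enter into relations before any value is bound to it.
        def __init__(self, label):
            self.label = label
            self.value = None
            self.bound = False
        def bind(self, value):
            self.value, self.bound = value, True

    def double_of(x):
        # A relationship defined over the name itself, independent of any particular value.
        return lambda: 2 * x.value if x.bound else None

    n = Name("n")
    rule = double_of(n)   # the relation exists while n is still unbound: that is abstraction
    print(rule())         # None -- n has no value yet
    n.bind(21)
    print(rule())         # 42 -- the same relation, once n is bound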

Jonathan Edwards

I am in the process of unifying copying with linking, so that you can dynamically re-parent a structure based on a linked reference, which results in higher-order functions. I hope that will earn me some respect with the functional programming crowd.

There is more detail in the paper (see higher-order copying), but frankly there are still a lot of issues to resolve. Subtext is still very much a partial sketch of an evolving idea.

. . .

Some object that my links are just pre-bound names, but that is exactly the point. I have removed name binding from the language. Binding is completely up to the programmer (or metaprogram) to do in whatever way they like when the program is constructed, and is outside the language semantics. Attaching ASCII identifiers is an optional convenience. By eliminating name binding, you don't have to wait till compile-time or run-time to see what the links mean. This is crucial to seeing abstractions in code as living examples that adapt to their context.
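
One way to read the distinction, sketched very loosely in Python below (this is my own analogy, not how Subtext is implemented): a name's meaning waits for an environment to resolve it, whereas a link is a direct reference whose meaning is visible the moment the structure is built.

    # With name binding, "x" means whatever some environment says at resolution time.
    def by_name(identifier, environment):
        return environment[identifier]

    # With a link, the reference *is* the target; nothing is left to resolve later.
    class Node:
        def __init__(self, value=None, link=None):
            self.value = value
            self.link = link              # a direct, "pre-bound" reference

    a = Node(value=7)
    b = Node(link=a)                      # what b.link means is visible immediately
    print(by_name("x", {"x": 7}))         # 7, but only once an environment exists
    print(b.link.value)                   # 7, readable straight from the structure
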
Topics: Representation | Language



Wednesday, May 04, 2005

Long anticipated, the arrival of radically restructured database architectures is now finally at hand. 

Jim Gray
Application developers still have the three-tier and n-tier design options available to them, but now two-tier is an option again. For many applications, the simplicity of the client/server approach is understandably attractive. Still, security concerns—bearing in mind that databases offer vast attack surfaces—will likely lead many designers to opt for three-tier server architectures that allow only Web servers in the demilitarized zone and database systems to be safely tucked away behind these Web servers on private networks.

. . .

All major database systems now incorporate queuing mechanisms that make it easy to define queues, to queue and de-queue messages, to attach triggers to queues, and to dispatch the tasks that the queues are responsible for driving.
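
As a toy illustration of a queue living inside the database, here is a sketch using Python's built-in sqlite3, with the table name and claim flag invented for the example; real systems expose vendor-specific queuing APIs and layer triggers and dispatch on top of something like this.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE task_queue (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        payload TEXT NOT NULL,
        claimed INTEGER NOT NULL DEFAULT 0)""")

    def enqueue(payload):
        with conn:                        # transactional insert
            conn.execute("INSERT INTO task_queue (payload) VALUES (?)", (payload,))

    def dequeue():
        with conn:                        # claim the oldest unclaimed message
            row = conn.execute(
                "SELECT id, payload FROM task_queue WHERE claimed = 0 ORDER BY id LIMIT 1"
            ).fetchone()
            if row is None:
                return None
            conn.execute("UPDATE task_queue SET claimed = 1 WHERE id = ?", (row[0],))
            return row[1]

    enqueue("send-invoice")
    print(dequeue())                      # 'send-invoice'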

. . .

the tendency is to implement publish/subscribe and workflow systems on top of the basic queuing system. Ideas about how best to handle workflows and notifications are still controversial—and the focus of ongoing experimentation.

. . .

The key question facing researchers is how to structure workflows. Frankly, a general solution to this problem has eluded us for several decades. Because of the current immediacy of the problem, however, we can expect to see plenty of solutions in the near future. Out of all that, some clear design patterns are sure to emerge, which should then lead us to the research challenge: characterizing those design patterns.

. . .

Now the time has come to build on those earliest mining efforts. Already, we’ve discovered how to embrace and extend machine learning through clustering, Bayes nets, neural nets, time-series analysis, and the like. Our next step is to create a learning table (labeled T for this discussion). The system can be instructed to learn columns x, y, and z from attributes a, b, and c—or, alternatively, to cluster attributes a, b, and c or perhaps even treat a as a time stamp for b. Then, with the addition of training data into learning table T, some data-mining algorithm builds a decision tree or Bayes net or time-series model for our dataset.
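
Outside the database, the same idea looks roughly like the sketch below, which uses pandas and scikit-learn with made-up column names; the point is only that "learning table T" amounts to a table whose learned columns are backed by a mining model trained on the other attributes.

    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier

    # A stand-in for "learning table T": attributes a, b, c plus a learned column z.
    T = pd.DataFrame({
        "a": [1, 2, 3, 4, 5, 6],
        "b": [0, 1, 0, 1, 0, 1],
        "c": [5.0, 3.2, 6.1, 2.8, 5.5, 3.0],
        "z": ["hi", "lo", "hi", "lo", "hi", "lo"],   # training data for the learned column
    })

    # The data-mining algorithm that would sit behind the learning table.
    model = DecisionTreeClassifier().fit(T[["a", "b", "c"]], T["z"])

    # Querying the learned column for a row that was never stored.
    print(model.predict(pd.DataFrame({"a": [7], "b": [0], "c": [5.8]})))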

. . .

Increasingly, one runs across tables that incorporate thousands of columns, typically because some particular object in the table features thousands of measured attributes. Not infrequently, many of the values in these tables prove to be null. For example, an LDAP object requires only seven attributes but defines another 1,000 optional ones.

Although it can be quite convenient to think of each object as a row in a table, actually representing them that way would be highly inefficient—both in terms of space and bandwidth. Classic relational systems generally represent each row as a vector of values, even in those instances where most of the values are null. Sparse tables created using this row-store approach tend to be quite large and only sparsely populated with information.

One approach to storing sparse data is to represent it in terms of three characteristics: key, attribute, and value. This allows for extraordinary compression, often as a bitmap, which can have the effect of reducing query times by orders of magnitude—thus enabling a wealth of new optimization possibilities. Although these ideas first emerged in Adabas and Model 204 in the early 1970s, they’re currently enjoying a rebirth.
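
A minimal sketch of the key/attribute/value idea, ignoring the bitmap compression entirely (the attribute names are invented): nulls simply never get stored.

    # A sparse object: only 3 of its (say) 1,000 possible attributes are present.
    person = {"id": 42, "cn": "Alice", "mail": "alice@example.org"}

    def to_triples(key, row):
        # Vertical (key, attribute, value) representation: nulls take no space at all.
        return [(key, attr, value) for attr, value in row.items() if value is not None]

    triples = to_triples(person["id"], person)
    print(triples)
    # [(42, 'id', 42), (42, 'cn', 'Alice'), (42, 'mail', 'alice@example.org')]

    # Pivoting back recovers the sparse row.
    rebuilt = {attr: value for _, attr, value in triples}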

. . .

all of these data types—most particularly, for text retrieval—require that the database be able to deal with approximate answers and employ probabilistic reasoning. For most traditional relational database systems, this represents quite a stretch. It’s also fair to say that, before we’re able to integrate textual, temporal, and spatial data types seamlessly into our database frameworks, we still have much to accomplish on the research front. Currently, we don’t have a clear algebra for supporting approximate reasoning, which we’ll need not only to support these complex data types, but also to enable more sophisticated data-mining techniques. This same issue came up earlier in our discussion of data mining—data mining algorithms return ranked and probability-weighted results. So there are several forces pushing data management systems into accommodating approximate reasoning.
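
A toy example of what "approximate answers" means in practice, using nothing more than Python's difflib for a similarity score (a real text-retrieval engine would use proper ranking models): the query returns a ranked, weighted list rather than an exact boolean result.

    from difflib import SequenceMatcher

    documents = {
        1: "restructured database architectures",
        2: "data mining with bayes nets",
        3: "approximate reasoning over text",
    }

    def ranked_search(query, docs):
        # Return every document with a similarity score in [0, 1], best first:
        # an approximate, weighted answer rather than an exact boolean one.
        scored = [(SequenceMatcher(None, query, text).ratio(), doc_id, text)
                  for doc_id, text in docs.items()]
        return sorted(scored, reverse=True)

    for score, doc_id, text in ranked_search("approximate reasoning", documents):
        print(f"{score:.2f}  doc {doc_id}: {text}")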

. . .

The emergence of enterprise data warehouses has spawned a wholesale/retail data model whereby subsets of vast corporate data archives are published to various data marts within the enterprise, each of which has been established to serve the needs of some particular special interest group. This bulk publish/distribute/subscribe model, which is already quite widespread, employs just about every replication scheme you can imagine.

. . .

It turns out that publish/subscribe systems and stream-processing systems are actually quite similar in structure. First, the millions of standing queries are compiled into a dataflow graph, which in turn is incrementally evaluated to determine which subscriptions are affected by a change and thus must be notified. In effect, updated data ends up triggering updates to each subscriber that has indicated an interest in that particular information.
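
In miniature, and leaving out the crucial step of compiling the standing queries into a shared dataflow graph, the structure looks something like this hypothetical sketch, where each subscription is just a predicate and an update notifies only the subscribers it affects.

    # Each standing query is just a predicate over an update. A real system would
    # compile millions of these into a shared dataflow graph rather than scanning a list.
    subscriptions = {
        "alice": lambda update: update["symbol"] == "MSFT",
        "bob":   lambda update: update["price"] > 100,
    }

    def publish(update, notify):
        for subscriber, standing_query in subscriptions.items():
            if standing_query(update):            # which subscriptions are affected?
                notify(subscriber, update)

    publish({"symbol": "MSFT", "price": 250},
            notify=lambda who, upd: print(who, "<-", upd))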

. . .

Indeed, if every file system, every disk, every phone, every TV, every camera, and every piece of smart dust is to have a database inside, then those database systems will need to be self-managing, self-organizing, and self-healing. The database community is justly proud of the advances it has already realized in terms of automating system design and operation. The result is that database systems are now ubiquitous—your e-mail system is a simple database, as is your file system, and so too are many of the other familiar applications you use on a regular basis.

. . .

algorithms and data are being unified by integrating familiar, portable programming languages into database systems, such that all those design rules you were taught about separating code from data simply won’t apply any longer. Instead, you’ll work with extensible object-relational database systems where nonprocedural relational operators can be used to manipulate object sets. Coupled with that, database systems are well on their way to becoming Web services—and this will have huge implications in terms of how we structure applications. Within this new mind-set, DBMSs become object containers, with queues being the first objects that need to be added. It’s on the basis of these queues that future transaction processing and workflow applications will be built.

Topics: SQL | Workflow


