Wednesday, October 28, 2009

Chapter 4

To say some piece of art is "great art", I think you need a single characteristic: the ability of many people to interpret the same piece in totally different ways. This is one of the craziest things about high-level perception -- the fact that, when you take some objects (tangible or intangible), glue them together with some sticky relations, and then recognize the result as a "situation" or brand-new entity which may be regarded as X by you but Y by your friends, you've got high-level perception. This is what brought us down from the trees and into our suits and ties. Sure, the sun rises on one side of the horizon and sets on the other -- but what does that mean?

How do you get your computer to produce these same kinds of meanings? Well, for a while, it was easy enough to hard-code the knowledge into it. But, of course, this is cheating. We want the computer to take data and abstract things like objects, relationships, probabilities, and meaning from the whole picture, right? Hofstadter says that figuring out how to appropriately represent a set of data is absolutely essential to discovering how humans process information. This is a common mantra in the AI community.

AI researchers have long tailored their data to match their algorithms. The idea is that, in order to emulate cognitive processing, one can leave out the mushy perceptual details by defining a more rigid stimulus of one's own. In other words, you can remove the low-level input's original state and replace it with a nicer one for the higher-level systems to work on. Obviously, this makes things easier to observe when you're doing research. Hofstadter says you shouldn't be mad at the AI people for this.


Doug is fuming mad (preface 4)

Hofstadter starts by recalling a news article which appeared in Science. The article praised a recently developed program, the Structure-Mapping Engine (SME), built on Gentner's structure-mapping theory. Supposedly, it could 'understand' concepts and make analogies between them -- such as between heat flow and water flow. Sadly, it was really all just clever programming on the part of the researchers, and the program didn't know what it was doing. Its job was just to manipulate symbols in a pre-defined manner.

Next, there is discussion of an article in the New York Times in 1993, which describes a program that could write novels by asking questions of its host (a person). It was hailed by many as a great marvel of the modern age: a computer that thought up stories! But it really wasn't understanding anything, despite what the article claimed. Doug was frustrated and angry at these statements -- he even wrote a letter to the NYT.

The final portion of the preface talks about the COPYCAT project. Hofstadter says he and his team would love to talk all day about COPYCAT, but he realizes that any researcher loves to go on and on about their own studies. I guess they might've felt a bit ashamed of their little program, when so many others were making headlines. But, it looks like there are still a few hundred more pages to go, so I bet something interesting is around the next corner.

Monday, October 5, 2009

Numbo - pgs 127-138

Daniel Defays, during his sabbatical year at Ann Arbor with Douglas Hofstadter in the late eighties, designed and implemented a puzzle-playing program based on ideas about human problem solving. The game it plays, "Le Compte Est Bon" ("the total is right"), generalizes as follows: given five randomly chosen small integers and one larger target integer, the goal is to combine the five using addition, subtraction, and multiplication so that they equal the target.

It is trivial to enumerate all of the possible answers with a computer program modeling the "Numble" world. But how do you do it with the goal of discovering how the human mind works on these problems?
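Just to convince myself how trivial the enumeration really is, here's a brute-force sketch in Python. The function name and the example numbers are my own, not Defays's, and for simplicity it uses all five numbers left to right -- the real game also allows using only a subset of the bricks and other groupings.

```python
from itertools import permutations, product
from operator import add, sub, mul

def solve(bricks, target):
    """Brute-force the Numble world: try every ordering of the
    bricks and every choice of +, -, * between them (left to right)."""
    ops = [(add, '+'), (sub, '-'), (mul, '*')]
    for nums in permutations(bricks):
        for chosen in product(ops, repeat=len(bricks) - 1):
            total, expr = nums[0], str(nums[0])
            for (fn, sym), n in zip(chosen, nums[1:]):
                total = fn(total, n)
                expr = f"({expr} {sym} {n})"
            if total == target:
                return expr
    return None

print(solve([2, 3, 4, 5, 6], 114))  # e.g. ((((2 * 3) * 4) * 5) - 6)
```

With five bricks this is only 5! x 3^4 = 9720 candidates -- no contest for a machine, which is exactly why enumerating them says nothing about how a human does it.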

Defays's architecture builds on the rote arithmetic tables we are all made to memorize in grade school. He considers these part of a larger system that teases the problem apart in our brain; I like to think of it as a form of preprocessing. When the program first encounters a problem, it may try a top-down approach -- starting with the goal and then compiling clusters of numbers and operations which fall within range of it. Beyond the arithmetic tables of 0-12, there are also some "landmark" numbers which hold special appeal for us humans -- multiples of 2, 5, and 10, powers, etc. These can be exploited to get within range of nearby numbers. The book uses the example of getting in range of 146 by noting that 144 = 12 x 12 -- a reasonable starting place for problem solving.
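That landmark trick is easy enough to sketch. Here's my own toy version (not Defays's code): scan the grade-school times tables for products that land near the target, the way 144 = 12 x 12 gets you near 146.

```python
def landmark_starts(target, limit=12):
    """Find products a*b from the 0-12 times tables that land
    within a small range of the target, as starting points."""
    hits = []
    for a in range(2, limit + 1):
        for b in range(a, limit + 1):
            if abs(a * b - target) <= 3:
                hits.append((a, b, a * b))
    return hits

print(landmark_starts(146))  # [(12, 12, 144)]
```

From a hit like (12, 12, 144), the solver's remaining job shrinks to bridging the small gap (146 - 144 = 2) with the leftover bricks.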

Wednesday, September 30, 2009

pgs 111-126.

Hofstadter says that humans do not attend to two rival interpretations of a problem at exactly the same time. He notes the Necker cube, the vase-face figure, and other illusions as examples. While we are able to switch between different interpretations of the same stimulus, our perceptual systems don't let us hold both at once. It may be the case, though -- and Hofstadter mentions this too -- that there is an unfelt parallelism going on behind the scenes; else, where would any of these thoughts be bubbling up from?

Next, he continues his thoughts on gloms. He says that his program has different kinds of gloms -- for syllables, vowel/consonant clusters, and other structures. Each of these can feel "happy" or "sad", and according to how a glom feels, it will be more or less likely to be attended to (I visualized a room full of gloms in the form of crying babies for some reason...).

Hofstadter describes different ways that gloms can be arranged and, more importantly, transformed. He cites the spoonerism, which swaps the initial consonants of the first two structures. Another noteworthy technique is the exchange of syllables, where the syllables are shuffled into a different order. And there are also reversals, kniferisms, and other techniques he thinks it reasonable to believe each of us might employ while doing a Jumble.

A final section of the chapter uses analogies from physical science to describe how Jumbo keeps itself self-organizing. The happiness levels of gloms are like the statistical mechanics of nature's micro-level rules, and the words that form out of this probabilistic chaos are like the macro-level thermodynamics we see as a result. Ideally, AI folks probably want to explain thought at the macro level, excluding the messiness of neurons and chemistry as much as possible. Personally, I don't think this will ever be possible. Why? Because you can't understand how a cloud acts (how a thought is made, fades, or becomes a memory) without understanding the chemistry of water and heat, or how the wind alters the cloud's world.

Wednesday, September 23, 2009

Hofstadter is kind-of a romantic.

Jumbo is an artificial intelligence endeavor that aims to model cognition as a collection of concurrent modules all sort-of aiming at the same place -- at least in problem solving. Despite my frowning, Hofstadter mentions that he does not seek to understand the microscopic mechanisms underlying brain activity and thought, but would rather pick apart the symbolic strata of information being pushed around in the mind.

Jumbo's problem domain is anagrams (a set of jumbled letters which must be rearranged into a real word to solve the puzzle). Instead of having the program simply sift through the full list of possible permutations, Hofstadter's architecture uses a technique similar to chemical bonding in the natural world (and also similar to his symmballs..).

By assuming that each letter has a set of "good" fits and "bad" fits -- such as "t" with "h" to form "th" -- Jumbo builds words out of many smaller parts, called gloms. Letters are weighted according to their use in the language -- English in this case. Simple on paper, but it probably required a lot of effort to balance in a way that produced reasonable results.
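I picture the pairwise fits as something like a weighted lookup. Here's a toy sketch of that idea -- the weights and names are entirely made up by me, not Jumbo's actual values:

```python
import random

# Made-up bond strengths for a few English letter pairs; higher
# means the pair is more eager to glom together.
AFFINITY = {
    ("t", "h"): 0.9, ("c", "h"): 0.8, ("s", "t"): 0.7,
    ("q", "u"): 0.95, ("z", "x"): 0.01,
}

def bond_probability(a, b):
    """Look up how strongly letters a and b want to bond."""
    return AFFINITY.get((a, b), 0.1)  # default weak attraction

def try_glom(a, b, rng=random.random):
    """Probabilistically decide whether two letters glom together."""
    return rng() < bond_probability(a, b)
```

The probabilistic coin-flip matters: a "bad" pair like "zx" can still glom occasionally, which is what keeps the system exploratory rather than deterministic.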

I especially like Hofstadter's intuition about imagining mind activity in terms of analogies of nature. In fact, it seems like one of the things he's best at is showing us that sort-of Zen quality of the universe -- that everything is related. He says that, like the chemical bonds found in nature, a letter is able to shift between gloms should the need arise. A letter is also strongly attracted to some letters and hardly attracted at all to others -- just like certain molecules.

Monday, September 21, 2009

pgs 87-95.

Doug Hofstadter spends a bit of time talking about his work on the Lisp program Jumbo, an anagram solver. A very interesting mention is made of his concept of the parallel terraced scan: a strategy in which many possibilities are explored at once, each cheaply at first, with the more promising ones earning progressively deeper and more expensive scrutiny. These levels pattern-match alongside the bottom-level program that is actually running the entire time. This conserves computer resources (or brain energy) by only running modules when the time is right (as in, when their preconditions are met).
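My reading of the parallel terraced scan can be sketched in a few lines -- this is my own toy, not Hofstadter's implementation. Every candidate gets a cheap first look, and only the promising fraction earns the next, costlier look:

```python
def terraced_scan(candidates, evaluators, keep=0.5):
    """Toy parallel terraced scan: every candidate gets a cheap look,
    and only the promising fraction earns the next, deeper look."""
    pool = list(candidates)
    for evaluate in evaluators:       # each stage is more expensive
        scored = sorted(pool, key=evaluate, reverse=True)
        pool = scored[:max(1, int(len(scored) * keep))]
    return pool

# Toy usage: find words sharing letters with "salt", scanning
# cheaply (first letter) before deeply (full letter overlap).
words = ["salt", "slat", "sale", "bolt", "tale", "lats"]
cheap = lambda w: w[0] == "s"
deep  = lambda w: len(set(w) & set("salt"))
print(terraced_scan(words, [cheap, deep]))  # ['salt']
```

The point is resource allocation: the expensive evaluator only ever runs on the survivors of the cheap one.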

This scaling reminds me of the screaming-demons architecture I learned about back in Cog166. The model went as follows: there is a set of demons which scream when a specific symbol is cast before them; furthermore, there is a set of demons which scream when a specific set of demons beneath them is screaming ... you can see the bottom-up approach visually. But those second-, third-, whatever-level demons would only start screaming when the lower level gave them a specific input. I am reminded now of a tree structure in computer science ... with the root giving me the answer I needed. I know that's not quite right for most data structures, but the visualization seems to fit here. My closing comment on this reading: the mind appears to work as a cluster of chaos that can quickly organize itself into something meaningful.
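That demon hierarchy (it sounds a lot like Selfridge's Pandemonium model) is easy to sketch. The classes and the "cat" example are my own invention, just to make the bottom-up screaming concrete:

```python
# A toy demon hierarchy: bottom demons scream at raw symbols, and
# higher demons scream when all of their underlings are screaming.
class SymbolDemon:
    def __init__(self, symbol):
        self.symbol = symbol
    def screams_at(self, stimulus):
        return self.symbol in stimulus

class CompositeDemon:
    def __init__(self, underlings):
        self.underlings = underlings
    def screams_at(self, stimulus):
        # Scream only when every demon beneath is screaming.
        return all(d.screams_at(stimulus) for d in self.underlings)

# The root answers the question "does the input contain c, a, and t?"
cat = CompositeDemon([SymbolDemon(c) for c in "cat"])
print(cat.screams_at("concatenate"))  # True
print(cat.screams_at("dog"))          # False
```

The root demon here is exactly the tree root I was picturing: it only screams when its whole subtree does.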

Monday, September 14, 2009

Islands

The human mind's ability to problem-solve is quite different from a computer program's, in that humans have slippery paradigms which may be shifted while examining a problem. This is much harder for a computer program, since only the problem as presented can tell it when to shift the importance of a technique forward or back on the scale. As Hofstadter says, the computer program fails at the figure/ground switch.

His reaction to this rigidity in computer program structure is an attempt to let fluid templates alter the importance of pattern-finding techniques in Seek-Whence's architecture.

The human mind works all at once, and as needed. There is no single CPU running the tasks of the operating system; there are (to push the computer analogy further) countless processing units taking on very small subtasks and calculations, clustering together to push information to the next set of units, and so on.

Given this, it is not hard to see that our minds may use both the number-savvy (expert knowledge) and pattern-sensitive (theory-based knowledge) techniques in parallel, intersecting at the right moments. This may create the illusion of a single stream of consciousness branching along a set of possibilities that all appear related to one another, but I don't think that's actually what is going on in the brain.

I think a very large number of solutions are posed and logically worked out simultaneously, with almost all of them quickly voted down in the first few milliseconds of thought. From there, the remaining set of solutions is held up for scrutiny -- both for truth against knowledge and for patterns.