
Wednesday, April 6, 2011

Best day... ever?

I was hoping to write a proper post about this, but I am somewhat too tired to do so, so I'll resort to posting a schedule of the day.

9:00 AM: Brain Day starts, introductions to talks by...

9:15 AM: Sebastian Seung (MIT) [I am my connectome TED Talk, on YouTube].

10:45 AM: Peter Strick (Pittsburgh): Motor control and basal ganglia / cerebellum topography.

1:30 PM: Jonathan Cohen (Princeton): Adaptive cognitive control.

3:00 PM: Ned Block (NYU, Philosophy) [On Consciousness YouTube clip].

[End of Brain Day]

7:00 PM: Perimeter Institute lecture by Roger Penrose [Discover Magazine article]

10:00 PM: Read web comics. There was a good crop today!

EDIT: Apparently Ryan North of Dinosaur Comics was also at the Perimeter Institute lecture. I'm kinda disappointed I spent too much time listening to the lecture to notice this.

Thursday, October 7, 2010

Context-free grammar (Part 1 of 2)

What are context-free grammars? An abstract definition might say that they're a set of rules allowing for the construction of complex structures consisting of logical parts that can be arbitrarily nested, but I've found that nobody really likes my abstract definitions, so I'll try using an analogy instead.

Consider a freight train hauling three containers (containers A, B, and C):

Let's say that container A can only contain livestock, container B transports liquids, and container C contains manufactured goods. It doesn't matter what type of livestock, liquids, and manufactured goods are transported so long as they fit into their respective categories.

We have just defined a very simple grammar. Within this structure, it is possible to have 'phrases' such as "pigs milk computers" or "cows mercury needles".

A cow-mercury-needle train and pig-milk-computer train collision would be unfortunate.

Both the cow-mercury-needle train and the pig-milk-computer train are examples of the same model of train — an A-B-C train — that happen to be carrying different items in their identical containers. The model of the train can be thought of as one rule (or phrase structure) in the grammar, but it is certainly possible to have other rules.
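For anyone who prefers code to trains, here's a minimal sketch of this grammar in Python (the symbol names and the particular cargo words are just made up for illustration). Each container is a rule, and generating a 'phrase' means picking an allowed cargo for each container:

```python
import random

# A tiny context-free grammar for A-B-C trains.
# Upper-case symbols are containers (non-terminals); lower-case words are cargo (terminals).
GRAMMAR = {
    "TRAIN": [["A", "B", "C"]],         # the one rule: a train is an A, a B, and a C container
    "A": [["pigs"], ["cows"]],          # livestock only
    "B": [["milk"], ["mercury"]],       # liquids only
    "C": [["computers"], ["needles"]],  # manufactured goods only
}

def generate(symbol):
    """Expand a symbol by picking one of its rules at random."""
    if symbol not in GRAMMAR:           # terminals are actual cargo; return them as-is
        return [symbol]
    words = []
    for part in random.choice(GRAMMAR[symbol]):
        words.extend(generate(part))
    return words

print(" ".join(generate("TRAIN")))      # e.g. "pigs milk computers" or "cows mercury needles"
```

Run it a few times and it will only ever produce three-word livestock-liquid-goods phrases, which is exactly what the A-B-C model allows.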

The analogy gets a bit weird here (yes, weirder than the above image) because context-free grammars also allow their containers to contain other trains.

Let's imagine a different model of train, the B-A-Weird train, which has a B-container (which must, as before, contain a liquid), an A-container (holding livestock), and a weird container which holds either an A-B-C train or a B-A-Weird train. Now, instead of a pig-milk-computer train, it would be possible to have a milk-pig-cow-mercury-needle train where the milk is in compartment B, the pigs are in compartment A, and the cow-mercury-needle train from above is all in the weird compartment.

Okay, that's pretty weird, but that's not nearly as strange as being able to put a B-A-Weird train inside the Weird compartment. This could allow us to take the entire train we just discussed and stick it in the last compartment of another B-A-Weird train. Then we could take that train and do the same thing, over and over, as much as we would like. However, at this point the train analogy has lost almost all meaning and is likely confusing matters substantially.
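The nesting is a little easier to see written out as rules than as trains. Here's the same made-up Python sketch with the recursive B-A-Weird rule added; the only real subtlety is cutting the recursion off at some depth so the trains don't nest forever:

```python
import random

# Same idea as before, but now a train can also be a B-A-Weird train,
# and the WEIRD compartment holds an entire train of either model.
GRAMMAR = {
    "TRAIN": [["A", "B", "C"], ["B", "A", "WEIRD"]],
    "WEIRD": [["TRAIN"]],
    "A": [["pigs"], ["cows"]],
    "B": [["milk"], ["mercury"]],
    "C": [["computers"], ["needles"]],
}

def generate(symbol, depth=0, max_depth=3):
    """Expand a symbol, cutting the recursion off so trains don't nest forever."""
    if symbol not in GRAMMAR:
        return [symbol]                 # terminal: an actual piece of cargo
    rules = GRAMMAR[symbol]
    if depth >= max_depth:              # deep enough: prefer the plain A-B-C rule
        rules = [r for r in rules if r == ["A", "B", "C"]] or rules
    words = []
    for part in random.choice(rules):
        words.extend(generate(part, depth + 1, max_depth))
    return words

print(" ".join(generate("TRAIN")))
# e.g. "milk pigs cows mercury needles": a milk-pig-(cow-mercury-needle) train
```

That last compartment is all it takes to get arbitrary nesting; the depth cap is only there so the example terminates, since the grammar itself is perfectly happy to recurse forever.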

Fortunately, there is a familiar application for context-free grammars that should make more sense now, namely language. By specifying containers in terms of parts of speech instead of cargo, we can construct sentences instead of trains. This is sort of reminiscent of Mad Libs: "the {noun} {past tense verb} the {noun}" would be one example of a sentence structure from a context-free grammar. Amusingly, this is also how Matlab's "why" function manages to produce output such as "The bald and not excessively bald and not excessively smart hamster obeyed a terrified and not excessively terrified hamster."

As before though, things get interesting when sentence structures get put inside of other sentence structures (just as trains... get... er, put inside of other trains). We can expand the definition of nouns to include phrases in the structure "{noun} the {noun} {past tense verb}". This allows phrases such as "man the hamster ate" to be used in place of the simple noun "man". And why not? Anywhere you can refer to a man, you can grammatically refer to a specific man who happened to be eaten by a hamster.

There is a slight problem with this setup in that, as before, things get complicated when structures are continually placed inside other structures. The phrase
"The dog the boy the man the hamster ate raised owned bit the boy the man the hamster ate raised."
is in fact valid using the two rules defined above and can be parsed as:
The (dog the (boy the (man the (hamster) ate) raised) owned) bit the (boy the (man the (hamster) ate) raised).
...meaning the hamster ate the man who raised the boy who owned the dog which bit the boy raised by the man eaten by the hamster.
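Written out as rules, the linguistic grammar looks almost identical to the train one. Here's another made-up Python sketch of the two sentence structures above; given a bit of luck with the random choices, it will happily produce center-embedded monsters just like that one:

```python
import random

# The two sentence structures from this post as a context-free grammar:
#   S:    "the {noun} {past tense verb} the {noun}"
#   NOUN: a plain noun, or "{noun} the {noun} {past tense verb}"
GRAMMAR = {
    "S": [["the", "NOUN", "VERB", "the", "NOUN"]],
    "NOUN": [["N"], ["NOUN", "the", "NOUN", "VERB"]],
    "N": [["dog"], ["boy"], ["man"], ["hamster"]],
    "VERB": [["ate"], ["raised"], ["owned"], ["bit"]],
}

def generate(symbol, depth=0, max_depth=4):
    """Expand a symbol, keeping the noun recursion from running away."""
    if symbol not in GRAMMAR:
        return [symbol]                 # terminal word
    rules = GRAMMAR[symbol]
    if depth >= max_depth:              # deep enough: stop embedding nouns inside nouns
        rules = [r for r in rules if "NOUN" not in r] or rules
    words = []
    for part in random.choice(rules):
        words.extend(generate(part, depth + 1, max_depth))
    return words

print(" ".join(generate("S")))
# With the right random choices this produces (give or take capitalization) exactly the sentence above:
# "the dog the boy the man the hamster ate raised owned bit the boy the man the hamster ate raised"
```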

So... yeah, people don't generally speak like that because of cognitive limits to how many nested levels of information we can keep track of at once. But maybe, you say, this idea of context-free grammar could still be useful for creating different kinds of structure. Some sort of really complex structure that can have parts embedded in other parts in ways that are hard for people to imagine. Perhaps, you hazard, some sort of biological structure.

In this last set of thoughts, you've outlined the general idea behind GenoCAD, a context-free grammar for synthetic biology. It seems to be a good fit at first, but as Andre from the UW iGEM team points out, there are quite a few properties of DNA sequences that it fails to capture. More on this later.

Friday, September 24, 2010

Popular Neuroscience

This is a personal blog; it is not necessarily a reliable source of information.

I don't normally talk about the work I do on co-op terms, because I don't want to accidentally offend my employers by telling everyone how horribly nice they all are or by making the similarly unfortunate error of posting confidential information. However, as I'm currently working for a neuroscience lab and it is pretty awesome, I figure it couldn't hurt to write a series of posts about neuroscience stuff. Now, having said all that, here's a confidential training video:



Okay, so that video wasn't actually confidential, but I did steal the link from our lab forum. Now, Cleese's talk is pretty advanced (though quite entertaining), so I'm going to take a step back with a short Q&A (I'm not one to pass up an opportunity to talk to myself).

Q: What is neuroscience?

A: It's the study of the nervous system, nominally. In the long run, neuroscience, psychology, cognitive science, biology, and computer science are all intricately linked in a crazy quest to understand the way we think, the way intelligence and information processing in general works, and how it all somehow works with physical, biological components.

Q: Is neuroscience really that similar to cognitive science? Isn't it really just biology, but concentrating on the nervous system / brain?

A: I'm not very good with all the research classifications in the area, but it seems like most of what's being published under the label of neuroscience is neurobiology. From an experimental perspective this certainly makes sense: the brain is biological, after all, so obviously a substantial amount of the work being done in reverse-engineering it involves biology. Ultimately, though, we're largely interested in the functionality of the brain and the way it gives rise to the interesting, complex interactions we all know and love: calling this functionality 'cognition' and attempting to understand said functionality is more or less what cognitive science is about. There is currently a big gap in understanding between neurobiology and cognitive science, which is (to grossly and incorrectly simplify things) what theoretical neuroscience is puzzling over.

Q: tl;dr

A: Grr. Ok, look: biological understanding of brain = improving, but massively, insanely complex already and not capable of explaining brain function. Cognitive science = good at finding ways to do specific things; lacks general theories and can't compete with real brains. Gee, it sure would be nice to have people working on some combination of the above things! Those people would definitely deserve enough funding to pay their co-op students. Big time.

Q: I'm not convinced. What's the point of taking multiple different (and possibly conflicting) approaches?

A: First, which one's 'right'? I dunno. That's pretty much answer enough: they both provide more information, thus they're both valid approaches. If they happen to conflict, even better: there would then be an opportunity to investigate and fix problems that are found with the theories. That's pretty much how science works.

Q: Fine, fine. You can start at either the biology or psychological/cognitive levels, I get that. How could you possibly work from the midpoint between the two levels outwards?

A: This conversation makes more sense when it's less abstract. Instead of talking in terms of a 'midpoint' between two different research areas, I should really have framed this as a need to go beyond current empirical biological data to build more functionally complex models. Instead of making hypotheses based solely on raw data, you can start making predictive models constrained by assumptions, information processing requirements, simplifications required for tractability, and so on...

Q: Sorry to interrupt, but isn't this post way too long already?

A: Yes.

Wednesday, April 7, 2010

Philosophy of Mind — Part 1: Qualia

I really wish I could say that what you are about to read is a well-thought-out dissertation on such deep and intellectually stimulating topics as human consciousness, the biological basis for memory, and neural representations of meaning, but as you can probably tell from the sketch above, this post is more of a lark than a serious attempt to explore philosophy of mind. Still, it's a topic I find pretty interesting, so hopefully this will degenerate into a more worthwhile stream-of-consciousness post than my typical rant.

Disclaimer: I might at times make claims in this post that make me seem like I have a clue what philosophy of mind is about. This is the internet, however, and I don't cite any sources. Reader beware.

Qualia are probably as good a starting point as any other. The word 'qualia' (pronounced kwalia, presumably after quality) is a term that describes — and you may want to brace yourself against the forthcoming hand waving — the subjective essence of an experience, the quality of a conscious sensation, the purplishness of purple being one example (although the "redness of red" is a more common example). At first, qualia don't seem to be overly interesting: it's hardly surprising that I can perceive redness or purplishness, or any of the other sensations I'm equipped to perceive, but the idea behind qualia is not simply that I notice when objects are purple but that this perception evokes a unique sensation that I can only appreciate due to my consciousness (and that a video camera would therefore not experience upon seeing the same purple).

You can almost feel that purplishness oozing into your mind, can't you?

If you're anything like me, qualia still probably seems like a somewhat esoteric and possibly useless concept at this point. It's loosely defined, not exactly a testable quantity, and seems to succeed mainly at evoking thoughts of sensations instead of helping to understand the process behind these sensations. So why have I spent so much time talking about qualia? Because the idea that every experience has this property, qualia, which is so familiar to us and yet indescribable, is both extremely pervasive and influential. Dualism, the belief that thinking, intuition, sensation, and logic are made of a mind-substance that is completely different from physical matter, is one attempt to deal with preconceptions about qualia. Dualism is generally discredited these days, thanks to science's proclivity for materialist theories, but I'm probably getting ahead of myself. The point is that consciousness is such a complex process that we have a catch-all term for the mind-bogglingly indescribable properties of everything.

It's now getting to the point where I'm experiencing the sensation of sleepiness in an intrinsically indescribable way, so I'm going to have to wrap this up abruptly. Clearly, I haven't done this topic justice, but if you're moderately interested then Eliasmith and Hofstadter (among many others) are interesting guys to check out. Also, this page about qualia inversions (à la inverted spectrum) is a better introduction to the way the term qualia is actually used by philosophers.

Oookay. Sleepy times.

Friday, January 15, 2010

Webpages 'n' Competitions 'n' Stuff

If I haven't been posting a lot lately it's because I've been giving the Waterloo iGEM team wiki a shiny new theme, mulling over the Intelligent Systems Challenge, and going to my actual job.

I don't have much else to report on, although the iGEM forums linked to a really cool story about oil drops which I'm shamelessly reposting here. This experiment is right up there with Millikan's famed oil drop experiment in terms of scientific oil experiments that utterly blow my mind.

Tuesday, December 29, 2009

Artifice and Intelligence

Fun video of the day: ABB FlexPicker Robots (YouTube)

In other news, I recently discovered Façade, an experiment in virtual storytelling which bills itself as "a one act interactive drama." Essentially, Façade is a game set in a basic 3D environment; however, interaction in Façade occurs mainly through the keyboard, not the 3D environment, with players typing messages in English to the two computer-controlled characters.

As with most games that rely on natural language processing, Façade doesn't actually do a very good job of understanding the typed messages. The interesting thing about Façade, though, is that thanks to the setup (you're stuck between a feuding couple) it doesn't really matter what you think; your inflections still affect the story. So all in all, it's really well done from an interactive fiction point of view, although it's completely irrelevant to the language processing stuff I was looking into when I found it.