Tuesday, December 21, 2010

Teleconference

It is not always easy to understand a large group of people speaking into a laptop microphone at the same time.

Sunday, October 31, 2010

Watermelon Carving

Exciting news everyone! I managed to grab the 'gingerich' username on Flickr (I tried for matt, but 210 people beat me to it... actually probably more than that, but matt211 was still available; also someone has majugi as well... grr).

Also, the reason I signed up for a Flickr account, after all this time, is because I don't want to put the entire series of awesome pictures I took of me carving a brain out of a watermelon on this blog. Here's one, though:

EDIT - Other crazy things I forgot to mention: I froze an egg while walking home from the grocery store today. I had noticed that one of my eggs was cracked, so I decided to fry it before it made a bigger mess but I ended up peeling away half the shell before I could 'pour' the egg into the pan. It is also now snowing (very lightly)!

...and the people I'm living with just filled four wine bottles with homemade apple cider (and I mean homemade in the fullest "pressed from apples grown on a friend's apple tree with an apple press built out of a car press and pieces of wood" sense — and the force required to press the apples was enough to shear the head of one of the screws clean off).

Wednesday, October 27, 2010

Voltage Dividers

Note: I found this post while deleting old drafts I never got around to finishing. I have no idea where I was going with it. I think I had some insightful analogy to make that related a voltage divider to life, music, minds, or some really esoteric projects that I have since forgotten. If you happen to think of what that analogy could have been, please let me know.

The sad thing is, I didn't even get around to explaining voltage dividers.

***

This is a voltage divider.

If the input voltage V is 5 volts and resistors R1 and R2 are equal, then Vout will be half of V, or 2.5 volts.
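Since the post never did get around to explaining it, here's the missing step: the output is taken at the node between the two resistors, and the general relationship is Vout = Vin × R2 / (R1 + R2). A minimal sketch of that arithmetic in Python (the function name and resistor values are just illustrative):

```python
# Minimal sketch of the voltage divider relationship (illustrative values).
def voltage_divider(v_in, r1, r2):
    """Voltage at the tap between R1 and R2, with R2 connected to ground."""
    return v_in * r2 / (r1 + r2)

print(voltage_divider(5.0, 1_000, 1_000))  # equal resistors -> 2.5
print(voltage_divider(5.0, 3_000, 1_000))  # unequal resistors -> 1.25
```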

Thursday, October 7, 2010

Context-free grammar (Part 1 of 2)

What are context-free grammars? An abstract definition might say that they're a set of rules allowing for the construction of complex structures consisting of logical parts that can be arbitrarily nested, but I've found that nobody really likes my abstract definitions, so I'll try using an analogy instead.

Consider a freight train hauling three containers (containers A, B, and C):

Let's say that container A can only contain livestock, container B transports liquids, and container C contains manufactured goods. It doesn't matter what type of livestock, liquids, and manufactured goods are transported so long as they fit into their respective categories.

We have just defined a very simple grammar. Within this structure, it is possible to have 'phrases' such as "pigs milk computers" or "cows mercury needles".

A cow-mercury-needle train and pig-milk-computer train collision would be unfortunate.

Both the cow-mercury-needle train and the pig-milk-computer train are examples of the same model of train — an A-B-C train — that happen to be carrying different items in their identical containers. The model of the train can be thought of as one rule (or phrase structure) in the grammar, but it is certainly possible to have other rules.

The analogy gets a bit weird here (yes, weirder than the above image) because context-free grammars also allow their containers to contain other trains.

Let's imagine a different model of train, the B-A-Weird train that has a B-container (which must, as before, contain a liquid), an A-container (holding livestock) and a weird container which holds either an A-B-C train or a B-A-Weird train. Now instead of a pig-milk-computer train, it would be possible to have a milk-pig-cow-mercury-needle train where the milk is in compartment B, the pigs are in compartment A and the cow-mercury-needle train from above is all in the weird compartment.

Okay, that's pretty weird, but that's not nearly as strange as being able to put a B-A-Weird train inside the Weird compartment. This could allow us to take the entire train we just discussed and stick it in the last compartment of another B-A-Weird train. Then we could take that train and do the same thing, over and over, as much as we would like. However, at this point the train analogy has lost almost all meaning and is likely confusing matters substantially.

Fortunately, there is a familiar application for context-free grammars that should make more sense now, namely language. By specifying containers in terms of parts of speech instead of cargo, we can construct sentences instead of trains. This is sort of reminiscent of Mad Libs: "the {noun} {past tense verb} the {noun}" would be one example of a sentence structure from a context-free grammar. Amusingly, this is also how Matlab's "why" function manages to produce output such as "The bald and not excessively bald and not excessively smart hamster obeyed a terrified and not excessively terrified hamster."

As before though, things get interesting when sentence structures get put inside of other sentence structures (just as trains... get... er, put inside of other trains). We can expand the definition of nouns to include phrases in the structure "{noun} the {noun} {past tense verb}". This allows phrases such as "man the hamster ate" to be used in place of the simple noun "man". And why not? Anywhere you can refer to a man, you can grammatically refer to a specific man who happened to be eaten by a hamster.
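To make the recursion concrete, here's a toy sketch of a grammar like this in Python. The rule names and word lists are invented for illustration (this is not how Matlab's "why" or any real parser works), but it shows how a noun slot can expand into a phrase that itself contains more noun slots:

```python
import random

# Toy context-free grammar: each rule maps a symbol to a list of possible
# expansions; anything not in the table is a terminal word.
GRAMMAR = {
    "SENTENCE": [["the", "NOUN", "VERB", "the", "NOUN"]],
    "NOUN": [
        ["man"], ["dog"], ["boy"], ["hamster"],
        ["NOUN", "the", "NOUN", "VERB"],  # recursive rule: a noun can contain a clause
    ],
    "VERB": [["ate"], ["raised"], ["owned"], ["bit"]],
}

def expand(symbol):
    """Recursively expand a symbol into a list of words."""
    if symbol not in GRAMMAR:
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    return [word for part in production for word in expand(part)]

print(" ".join(expand("SENTENCE")))
# e.g. "the man the hamster ate bit the dog"
```

Because the NOUN rule can invoke itself, nothing stops the expansion from nesting arbitrarily deep, which is exactly where the next example goes off the rails.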

There is a slight problem with this setup in that, as before, things get complicated when structures are continually placed inside other structures. The phrase
"The dog the boy the man the hamster ate raised owned bit the boy the man the hamster ate raised."
is in fact valid using the two rules defined above and can be parsed as:
The (dog the (boy the (man the (hamster) ate) raised) owned) bit the (boy the (man the (hamster) ate) raised).
...meaning the hamster ate the man who raised the boy who owned the dog which bit the boy raised by the man eaten by the hamster.

So... yeah, people don't generally speak like that because of cognitive limits to how many nested levels of information we can keep track of at once. But maybe, you say, this idea of context-free grammar could still be useful for creating different kinds of structure. Some sort of really complex structure that can have parts embedded in other parts in ways that are hard for people to imagine. Perhaps, you hazard, some sort of biological structure.

In this last set of thoughts, you've outlined the general idea behind GenoCAD, a context-free grammar for synthetic biology. It seems to be a good fit at first but, as Andre from the UW iGEM team points out, there are quite a few properties of DNA sequences that it fails to capture. More on this later.

Tuesday, October 5, 2010

All the Way


I saw this rainbow as I was biking home today. The crummy cellphone panorama I took of it doesn't do it justice at all, because it was really quite vivid. In fact, looking closely at it revealed that it was a double rainbow with the second, very faint, rainbow surrounding the main one (slightly easier to see in the following picture - sadly this one's from the same cellphone camera).


Anyway, say what you will about guys sobbing hysterically over rainbows (or sing it, if you're so inclined), but know that they are much, much cooler looking live than in recordings.

Friday, September 24, 2010

Popular Neuroscience

This is a personal blog; it is not necessarily a reliable source of information.

I don't normally talk about the work I do on co-op terms, because I don't want to accidentally offend my employers by telling everyone how horribly nice they all are or making the similarly unfortunate error of posting confidential information. However, as I'm currently working for a neuroscience lab and it is pretty awesome I figure it couldn't hurt to write a series of posts about neuroscience stuff. Now, having said all that, here's a confidential training video:



Okay, so that video wasn't actually confidential, but I did steal the link from our lab forum. Now, Cleese's talk is pretty advanced (though quite entertaining), so I'm going to take a step back with a short Q&A (I'm not one to pass up an opportunity to talk to myself).

Q: What is neuroscience?

A: It's the study of the nervous system, nominally. In the long run, neuroscience, psychology, cognitive science, biology, and computer science are all intricately linked in a crazy quest to understand the way we think, the way intelligence and information processing in general works, and how it all somehow works with physical, biological components.

Q: Is neuroscience really that similar to cognitive science? Isn't it really just biology, but concentrating on the nervous system / brain?

A: I'm not very good with all the research classifications in the area, but it seems like most of what's being published under the label of neuroscience is neurobiology. From an experimental perspective this certainly makes sense: the brain is biological, after all, so obviously a substantial amount of the work being done in reverse-engineering it involves biology. Ultimately though, we're largely interested in the functionality of the brain and the way it gives rise to the interesting, complex interactions we all know and love: calling this functionality 'cognition' and attempting to understand said functionality is more or less what cognitive science is about. There is currently a big gap in understanding between neurobiology and cognitive science, which is (to grossly and incorrectly simplify things) what theoretical neuroscience is puzzling over.

Q: tl;dr

A: Grr. Ok, look: biological understanding of brain = improving, but massively, insanely complex already and not capable of explaining brain function. Cognitive science = good at finding ways to do specific things; lacks general theories and can't compete with real brains. Gee, it sure would be nice to have people working on some combination of the above things! Those people would definitely deserve enough funding to pay their co-op students. Big time.

Q: I'm not convinced. What's the point of taking multiple different (and possibly conflicting) approaches?

A: First, which one's 'right'? I dunno. That's pretty much answer enough: they both provide more information, thus they're both valid approaches. If they happen to conflict, even better: there would then be an opportunity to investigate and fix problems that are found with the theories. That's pretty much how science works.

Q: Fine, fine. You can start at either the biology or psychological/cognitive levels, I get that. How could you possibly work from the midpoint between the two levels outwards?

A: This conversation makes more sense when it's less abstract. Instead of talking in terms of a 'midpoint' between two different research areas, I should really have framed this as a need to go beyond current empirical biological data to build more functionally complex models. Instead of making hypotheses based solely on raw data, you can start making predictive models constrained by assumptions, information processing requirements, simplifications required for tractability, and so on...

Q: Sorry to interrupt, but isn't this post way too long already?

A: Yes.

Wednesday, September 15, 2010

Hand-Eye Impairment


I had a conversation about piano playing with one of the other guys in my house recently. I had been practicing on my keyboard and he admitted that he used to take lessons himself. As most people do, he downplayed his current ability, but I found it interesting that he did this by saying that he used to always memorize pieces so that he could look at his hands (and therefore never really picked up sight-reading).

I've always hated memorizing music and am actually quite dependent on having a piece of paper in front of me. This is frustrating when I'm near someone else's piano (or my own, without music) and someone asks me to play something. Most piano players would just start playing some song they learned when they were ten, but I really don't remember what I've played before. Perhaps more importantly, having to read the music off a page all the time means I have to spend a lot of effort concentrating on reading that could be better used on technique, or artistry, or listening to what I'm playing instead of just playing it.

So with the goal of memorizing some pieces and with the earlier conversation in mind, I tried playing some pieces while looking at my hands instead of the page. It turns out, I really can't do it. In fact, I am far less capable of playing the piano while looking at my hands than I am when I look away entirely or close my eyes. It seems that looking at my hands move prevents me from being able to use them normally, like my brain's not able to cope with the strange new visual feedback that comes from actually watching what I'm doing.

I find this interesting, as it took many long years of conditioning not to look at my hands while playing and it would appear that that conditioning goes pretty deep. Now to go and work on undoing that work...

Sunday, September 12, 2010

Videos, by JoVE!

This post is nothing more than a link to the journal of visualized experiments, because there's really no reason you should be reading my ramblings when you could be watching science.

It's like a Discovery Channel show, except far more specialized, technical, and current.

Also, if you watch the first few seconds of a clip and then are frustrated when it asks for a subscription, you are probably not using the University of Waterloo's network at the moment. If that is the case but you are still a Waterloo student/faculty, do the following: go to lib.uwaterloo.ca and click the connect from home link (or just click the connect from home link in this post, that will work as well), then login and go to http://www.jove.com.proxy.lib.uwaterloo.ca. If that doesn't work, then just do a search for the journal of visualized experiments after logging into the proxy. This is fully worthwhile, really.

Thursday, September 9, 2010

Burning Lips and Soaking Music

The engineering jazz band (With Respect to Time) gig went totally alright tonight. How alright? This alright:
FIRE! RAIN! GLubhuGLabuhGlublargala!

Now you might be thinking: it seems to me that could have gone a little more alright. And you would be partially right! Because, yeah, you know, it wasn't completely perfect and we will probably have to print off some new music (including a copy of that trumpet solo I lost before the set... the panicked improvised version was a more legit jazz solo anyway) but it was kind of fun in an insane Oh God There's Frosh Everywhere And Maybe This One Won't Mind If I Shove Him Out Of My Way With A Trumpet Case kind of way.

I was actually pleasantly surprised by how not outright painful this performance was. I mean that in a completely literal sense: trumpet playing is really quite physical, but it uses muscles (lip muscles) that are hardly ever used for anything else. What that means is that if you haven't practiced for a while, playing a long gig can feel an awful lot like trying to run a marathon without training. As of this Tuesday's rehearsal I was in no shape to run a metaphorical trumpet marathon, but somehow it worked out tonight (thanks are owed to the other two trumpet players who showed up today for pulling us through).

I could go on (so much craziness in one evening!) but tomorrow is still a work day, so I should probably get to the sleeping.

Saturday, September 4, 2010

Parachuter

Just a doodle because I don't have much to do (aside from work-related things and, as it's a Friday late evening — or very early on Saturday, take your pick — those things can wait). This isn't a "journal doodle" or anything like that... although if anyone wants to jump out of some planes it could become one.

EDIT: After posting the first image, I realized I made some significant technical errors in my depiction of the parachutist's neurophysiological response to the fall. Here's a correction.

Wednesday, September 1, 2010

How Recruitment Should Work

"We're kidnapping you for your own good. Promise. Write the code and you'll get flatbread."

Yup, the fall term's about to start. It's probably about time to start working on that fancy recruitment campaign for iGEM that we keep dreaming about... but don't worry non-iGEM people who read this blog (hi family!), I'm planning on writing about something else here.

And that something else is...

Things That I'm Going to Rant About to New Students

I don't really see myself as the ranting type in general, but there's something about getting a fresh, naive, relatively uncynical, and slightly puzzled batch of new students that brings out the old curmudgeon in me. So, in list form, the things I'm going to be giving today's generation of youth an earful about are:
  1. The definition of engineering. It perplexes me how many people get to third year engineering and then act surprised when they find out that engineering is the profession of applying scientific and mathematical principles to practical problems. Even in systems design, which is basically the study of distilled engineering ideology abstracted away from specific domains, there are still people who think it's all about hard hats and accounting. Yes, that's a part, but... well... if that's your image of engineering, you are a personal pet peeve of mine.
  2. Time hoarding. There's time management and then there's being a wuss. I know you've heard the failure rate statistics. I'm sure you've been told it's wise to cut back on extra-curricular activities until you get used to the workload. That's garbage. The easiest way not to fail out is to keep trying to maintain your stupidly over-achieving lifestyle that got you into university in the first place. If you lower your standards down to "just trying to pass" in first year, you'll never recover. This is a somewhat controversial point and your mileage may vary; nonetheless, you don't have much to lose in first year. Now, taking on extra work halfway through second year is a different story.
  3. Ability to write. I know, I know. Given the poor standards of this blog, this point's a bit hypocritical, but I stand by it. My standards are not high: just catch the blatant typos and missed articles and use apostrophes more or less where they should be used. You can throw in semicolons, commas, and colons wherever you want. Deal?
  4. Stop whining. You only get to make lists like this when you're at least half done your degree and even then only once a term. Grad students are allowed unlimited whining, but they've earned it.

Thursday, August 12, 2010

Alternate Reality Campus


I'm not sure what prompted this, but I've had this idea of a "University of Igloo" floating around in my head for a while. I think I originally thought of it as a satire of the University of Waterloo's big rebranding efforts, but I didn't get around to developing it into an actual parody. So here's some guy chilling next to an igloo in the summer.

Post-script: It pains me to admit this, but I originally accidentally capitalized igloo as 'iGloo'.

Friday, July 9, 2010

Posters!

According to some posters I saw on the wall, the end of term concert for the University of Waterloo's a cappella groups is July 23rd at 8:00 PM in the Theatre of the Arts. I don't know if I'll have time to go, but it should be fun times!

EDIT: Oh, hey, also: it turns out that July 23rd is also the day of the Engineering Jazz Band Charity Gig! Yayyyyy. The jazz band will be playing sometime around 6 PM (I think, details coming) at Waterloo Town Square. As a member of the band, I will probably be there. See both the jazz band and a cappella concerts and get your fill of music for the whole year in one day!

EDIT 2: Still looking for things to do on July 23rd? You could catch a webinar about using Matlab to process large datasets around 2 PM before heading to the afternoon/evening concerts. I know I'll be watching... or, probably not actually... but someone might find it interesting.

Wednesday, June 23, 2010

Public lectures and I found my stylus!

Yessssssssssss. That is the sound of me rediscovering my tablet computer's stylus. You don't realize how expensive a piece of plastic can be until you start thinking you'll have to replace one. Fortunately, I don't have to replace one, so here's a celebratory doodle:

Click for larger sketchy drawing!

The iGEM team (next post, I'll stop writing about iGEM in the next post) hosted a public lecture today about synthetic biology. There were a ton of people there which was pretty awesome and various misconceptions about the recent Venter Institute announcement were addressed. I did get the feeling the panel was preaching to the choir a little bit about some issues, but with the number of people there, I'm sure the topic was new to a sizeable group.

In other news, I appear to have pulled a gluteal muscle for no apparent reason. This is not so fun. I would go so far as to recommend against pulling such muscles, in fact!

Friday, June 4, 2010

Let's All Learn About iGEM Software! Yay!

This is one of those posts I wrote for my own benefit, not some hypothetical readership. This will likely be apparent. You have been warned.

I was originally planning on writing about something else and including my standard doodle with this post, but my Google search for reference images seriously grossed me out and I don't have time for that. Instead, here's a post about iGEM software which will force me to do some useful research instead of looking up eye-rending images on Google. Yay!

If you don't know me and are new to this blog, here are some explanatory links about iGEM. With that out of the way, what's up with this year's Waterloo iGEM software project? Well, we're working with a BioCompiler concept that roughly translates to: take code, apply genius, extract BioBrick assembly instructions. A compiler, in a computational context, takes code and translates it into equivalent code in a much more primitive language; what we intend to do is an exact analogy of that process for biological systems.

We haven't gotten very far with this design yet, but already there seems to be a canonical example of the goal in the form of the pseudocode "if the concentration of (something) exceeds (value) and the concentration of (something else) is less than (another value) then produce [something]". I wonder how much complexity would be involved in simply translating specifications that meet the above template into assembly instructions; intuitively, I don't feel it should be too excessive, but there is already some possibility for conflicts between the signalling pathway of "something" versus "something else" and this system would already probably be pretty tricky to construct. Following this vague syntax, our working design for the main 2010 project would fit the template "if (something) is present, produce (something else); if the concentration of (something else) is above (constant), produce (yet another thing)". That's an almost trivial piece of code, but it's already pushing the limits of what we can model and what we can construct. Whether this is a ringing endorsement of the BioCompiler concept or an outright condemnation, I'm not sure.
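Just to pin down the "template" idea, here's a toy sketch of how a specification following that pseudocode pattern might be written down as structured data before any translation into assembly instructions happens. The field names and species labels are invented for illustration and have nothing to do with the real BioCompiler design:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """One 'if <condition on a concentration> then produce <output>' clause."""
    sensed: str        # species whose concentration is sensed
    threshold: float   # concentration threshold (arbitrary units)
    above: bool        # True: fire when the concentration exceeds the threshold
    produces: str      # species to produce when the condition holds

# The working 2010 design, phrased in this toy form:
# "if A is present, produce B; if [B] is above some constant, produce C"
program = [
    Rule(sensed="A", threshold=0.0, above=True, produces="B"),
    Rule(sensed="B", threshold=1.0, above=True, produces="C"),
]

for rule in program:
    relation = "exceeds" if rule.above else "is below"
    print(f"if [{rule.sensed}] {relation} {rule.threshold}, produce {rule.produces}")
```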

The BioCompiler concept is both really exciting and worrisome at the same time. Worrisome, because in order to pursue the BioCompiler idea we'll be discarding work done by the software team of the previous term, which sets a bad precedent for continuity. Moreover, this project will definitely not be done before the end of the term, so we'll need to depend on the software team from the opposing stream to continue it. On top of the "sorry I killed your project" factor, it also bugs me that this isn't a project I could conceivably sit down and write myself given a couple weeks. I mean, I realize that it's a good thing that we have an ambitious project that will involve more than myself, and I never had any intention of literally writing the whole software project by myself, but at the same time I like being able to deliver tangible results and this project has no guarantees of delivering those results any time soon. Nonetheless, optimism runs high.

Okay. That was all a long diversion because the point of this post was for me to force myself to read and talk about UC Berkeley's 2009 software project. Their project had four main components, all of which are helpfully embodied by silly characters in their documentation. Eugene, the "red-headed stepchild and language", is a formal way of describing BioBrick components. His glasses, Spectacles, are (represents? is? this is where the anthropomorphization of the software modules becomes frustrating linguistically) a tool for visualizing Eugene's data. Kepler is a "Wise Astronomer and Workflow Wizard" which is to say that it's the part of the project concerned with the assembly of parts and it/he actually guides a robot through the assembly process. That was bold and italic because it's that mindblowing. Finally, Clotho (AKA the Hot Green Chick) is a "Greek Goddess and Software Tool". Say what you will about Berkeley's documentation, at least it's memorable (and actually the rest of their documentation is decent as well, including design notes from team members and demo videos of some of the software products).

Anyway, Clotho, yeah. Clotho is an attempt to span the design hierarchy between the parts-level design that Kepler, Eugene, and Spectacles work at and the systems-level and device-level design that engineers like to think at. Actually, upon re-reading the Berkeley page, it appears they believe Kepler, Eugene, and Spectacles operate at the device-level. Huh. It also appears that despite lofty aims at hierarchy-spanning behaviour from Clotho, that module is a simple data management system. It might help you organize your thoughts at varying levels of abstraction, but it certainly won't take ideas written at one level and push them to another. That, I guess, is where we come in.

Alright folks, I'm falling asleep so this is as far as we get today. If you've read this far, I... well I don't know what to think. I'm assuming you haven't read this far. Maybe you cheated and skimmed to the end. In any case, thanks for bearing through it and hopefully everyone learned a valuable lesson about iGEM software. Yay!

Saturday, May 22, 2010

What's Wrong With Wave?

Wave fail.

When Google Wave first came out there was a ton of excitement surrounding it. I was never fully persuaded by the hour-long introduction video, but I was still an early adopter (in the limited pre-beta beta release, or whatever Google called it) and a relatively enthusiastic user. I've tried playing Sudoku on Wave. I've tried having various group meetings on Wave. Some of these experiences were moderately successful, some of them led to failed projects (goodbye ISC 2010), and ultimately Wave seems to have failed to catch on.

Now, granted, it's still in a beta version, but I don't want to dwell on Wave's prospects. I'm simply surprised that its talented development team (former members of the Google Maps team) were responsible for its confused interface design, identity issues (the hour-long intro should have been a tip-off), and (improving, but still bad) performance and reliability.

I'm not sure what lessons to draw from the struggle of the Wave team. That even the best developers in the world still fail sometimes? That earlier validation of designs is necessary? That addressing problems in existing solutions is a disastrous method for innovation if you don't consider the new problems you're introducing? I could probably go on for a while but the thing is, as easy as it is to identify problems with hindsight, I don't know (and can't know) if I would have caught these things had I actually been involved in the design of Wave.

Companies are often happy to boast about their successful design strategies and crazy new innovation paradigms, but there is an obvious selection bias at play in the reporting of these strategies. Who's going to give a TED talk on "Industry-Standard-Centric Design"? Or "Useless But Totally Cool Sounding Paradigms"? Actually, that second one might be a real TED talk, but generally speaking, despite a trizillion books about it, there are not many trustworthy resources about software project management.

EDIT: This just in, other people are writing about interestingly related material! Specifically, lessons learned from failed software products.

Tuesday, May 18, 2010

GUID Socks: Closer to Reality

Waaaaaaay back in the day, I wrote about creating pairs of socks tagged with globally unique identifiers (GUIDs) that would allow you to match up pairs of socks for neat, orderly folding. I remember being pretty enamored with the idea at the time and started working on some spiffy second-generation designs where the GUID was translated into visually distinct patterns. I even had the beginnings of a promotional website made, to sell the world on the wonders of cryptographically secure socks.
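For anyone wondering how simple the core idea is, here's a minimal sketch in Python: both socks in a pair share one GUID, and a pattern is derived from its bytes. The stripe scheme below is made up on the spot for illustration, not my actual second-generation design:

```python
import uuid

PALETTE = ["red", "orange", "yellow", "green", "blue", "purple"]

def make_sock_pair():
    """Generate one shared GUID for a pair of socks and a stripe pattern derived from it."""
    tag = uuid.uuid4()
    stripes = [PALETTE[b % len(PALETTE)] for b in tag.bytes[:4]]
    return tag, stripes

tag, stripes = make_sock_pair()
print(f"Pair {tag}: stripes = {stripes}")
# Matching socks is then just grouping by identical tag (or identical stripes).
```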

Alas, that idea has yet to fully come to fruition, but as I was deleting some spam comments from older entries of this blog, I stumbled on this gem of a comment from someone going by the moniker 'Jon':

Hi, I got the very same idea and decided to google it and ended up here.

The question here is how to automatically feed the sock machine with new data for every pair. I need to check with a manufacturer if its possible. :)

Sometimes, the internet is pretty awesome.

Recruitment Meeting

This picture is in no way related to the content of this blog post.

UW iGEM had its recruitment meeting today. It was fun times! But I really need to work on my presenting abilities; I find I'm really hit-and-miss in my ability to actually convey information while speaking to groups of people. This doesn't come as a surprise: I was always consistently bad at "verbal communication", as the elementary school report cards would call it, at least by the crazy perfectionist standards of my youth. Nonetheless, I like to avoid being "that guy that gave that talk that nobody understood" whenever possible.

I find it interesting that the fields I'm working in now, both at iGEM and in co-op terms, tend to be the interdisciplinary/academic/cutting-edge-no-one-really-knows-what's-going-on fields that are prone to having miserably unintelligible talks. I've sat through a lot of these talks now and they're fairly painful, so painful at times that it's tempting to simply say that the presenter stood no chance and that the topic was simply too complicated to communicate in the available time. Of course, then I go off and watch some TED talks and realize that you can describe pretty much anything worth describing in twenty minutes if you're just that good.

Postscript: I was just about to post this and was looking through what tags to apply, when I realized I have an existing "Painfully Bad Poetry" tag. We were discussing painfully bad iGEM poetry for some reason or other at today's meeting and, well, I have the tag, so...

iGEM (a haiku)
the title counts as a line
oh shoot, out of room

Okay, this next one's the real haiku, not that that previous one wasn't (also, non-iGEM people, iGEM is pronounced "I gem", which is ironic given its silly capitalization which I've probably ranted about before).

lots of little cells
and biology stuff, woot
will sell soul for sleep

Sunday, May 9, 2010

Change is Painful

(click for larger image)

As you may have noticed (or heard me complain about on Twitter), Google recently added a sidebar to the left of their search results. Now, I know some other search engines (named Bing) already have such a sidebar and, in fact, it's not a look that I have any innate hatred for. I generally appreciate designers' attempts to improve software interfaces, even in such controversial cases as the Microsoft Ribbon or one of the many Facebook Updates.

But I draw the line here.

Google has always had a good interface. A little textbox, search button, and a list of results. It was genius! Even one of those drinking bird toys could probably figure out how to use it. The new sidebar doesn't actually take away any functionality, but at the same time it doesn't do a whole lot. We're helpfully informed that we're searching through "everything" and have the option of narrowing that down to videos, or clicking the little expandy arrow thing to see a big ol' list of things including "updates" and "shopping" to search through. There's also another little expandy arrow thing below that allows you to see options like "wonder wheel", "social", pick a time period to search through, or display fewer or more 'shopping' results.

All well and good, but there already was a toolbar at the top of the page that allowed you to refine your search and the advanced search box provided access to the other fancy Google options. I can see why Google might have wanted to expose more people to the more advanced search options, but well... years of using Google have trained me to look at the spot right where the sidebar is for my most relevant search results and now I am left staring at a cheery 'Everything!' notice instead of useful information. My eyeballs do not want to be retrained, Google. If they did, I would have switched over to Bing when it came out.

It's a bit disturbing, really, how much this slight change frustrates me (to the "I have a headache from looking at this page" degree). It emphasizes how much time I've really spent on Google and how easy it is for little icons, properly placed, to completely distract me from the information I actually need. Some would say this is a wake up call. I think I'll just hang on to the old version of the site, while it's still around.

Sunday, April 18, 2010

Bonsai Trees


My dad's been toying with a little bonsai tree for a while. His bears no resemblance to the above sketch; in fact, it's actually a lot cooler looking because it bends down to be parallel to the ground and lines up with its oval pot. It's not fully styled though, which means that it still basically looks like a potted plant instead of a potted tree. We spent some time today trying to figure out what parts to clip off to make it look more tree-like, but it turns out to be pretty difficult to visualize what effect the clipping will have until after it's done.

Edit: As a side note, the sketch accompanying this post is the latest in a long line of proofs for my "drawing foliage is hard" conjecture. I tried to draw this one really quickly because (a) as usual, I want to get to sleep and (b) I'd like to be able to use foliage as a subordinate element in visualization sketches.

Edit 2: Bonsai tree, version 2:

Wednesday, April 7, 2010

Philosophy of Mind — Part 1: Qualia

I really wish I could say that what you are about to read is a well-thought out dissertation on such deep and intellectually stimulating topics as human consciousness, the biological basis for memory, and neural representations of meaning, but as you can probably tell from the sketch above, this post is more of a lark than a serious attempt to explore philosophies of the mind. Still, it's a topic I find pretty interesting so hopefully this will degenerate into a more worthwhile stream of consciousness post than my typical rant.

Disclaimer: I might at times make claims in this post that make me seem like I have a clue what philosophy of the mind is about. This is the internet, however, and I don't cite any sources. Reader beware.

Qualia are probably as good a starting point as any other. The word 'qualia' (pronounced kwalia, presumably after quality) is a term that describes — and you may want to brace yourself against the forthcoming hand waving — the subjective essence of an experience, the quality of a conscious sensation, the purplishness of purple being one example (although the "redness of red" is a more common example). At first, qualia don't seem to be overly interesting: it's hardly surprising that I can perceive redness or purplishness, or any of the other sensations I'm equipped to perceive, but the idea behind qualia is not simply that I notice when objects are purple but that this perception evokes a unique sensation that I can only appreciate due to my consciousness (and that a video camera would therefore not experience upon seeing the same purple).

You can almost feel that purplishness oozing into your mind, can't you?

If you're anything like me, qualia still probably seems like a somewhat esoteric and possibly useless concept at this point. It's loosely defined, not exactly a testable quantity, and seems to succeed mainly at evoking thoughts of sensations instead of helping to understand the process behind these sensations. So why have I spent so much time talking about qualia? Because the idea that every experience has this property, qualia, that is so familiar to us and yet indescribable is both extremely pervasive and influential. Dualism, the belief that thinking, intuition, sensation, and logic are made of a mind-substance that is completely different from physical matter, is one attempt to deal with preconceptions about qualia. Dualism is generally discredited these days, thanks to science's proclivity for materialist theories, but I'm probably getting ahead of myself. The point is that consciousness is such a complex process that we have a catch-all term for the mind-bogglingly indescribable properties of everything.

It's now getting to the point where I'm experiencing the sensation of sleepiness in an intrinsically indescribable way, so I'm going to have to wrap this up abruptly. Clearly, I haven't done this topic justice, but if you're moderately interested then Eliasmith and Hofstadter (among many others) are interesting guys to check out. Also, this page about qualia inversions (à la inverted spectrum) is a better introduction to the way the term qualia is actually used by philosophers.

Oookay. Sleepy times.

Saturday, March 6, 2010

Books & Puzzles & No Time

I went on a bit of a book-downloading spree recently, having just discovered manybooks.net. That site is amazing because not only does it organize books from both the Project Gutenberg and Creative Commons collections, it also provides reviews for some of them and lets you download in a ton of e-reader formats (including Amazon's proprietary azw format). I've yet to actually buy a book from Amazon, but I've already loaded over thirty documents onto my Kindle including some pretty large files (Ulysses, anyone?) which I will probably never get around to reading.

It's pretty fantastic knowing that one device (and an electrical outlet) could contain enough material to keep me entertained for several thousand hours, but the problem with this of course is that I don't actually have several thousand hours free to spend reading. It actually makes me nostalgic for those times I've had to spend hours sitting on a bus from Ottawa to Waterloo or taking a super-delayed flight somewhere: think of all that time wasted without a magic pocket library thing!

(Incidentally, if you're planning on getting an e-reader, I don't necessarily endorse Amazon. Their file support is basically limited to their proprietary format and semi-crippled PDF support, which means that they don't support formats that work with library DRM. On the other hand, the e-ink screen is great, so if you're comparing it to an iPad... well, I'm not sure why you would ever get an iPad. I'd look into the Asus, Sony, and Amazon readers first.)

Sunday, February 28, 2010

Jump Little Leptons, Jump!

I've noticed that there have been some ads for "Quantum Jumping" running on this blog. I would like to note that I do not endorse these ads and that they are, in fact, utter baloney.

Sunday, February 21, 2010

Vancouver 2010


So the Canadian men's hockey team just lost to the Americans. Boo. It was a good game, though. How does a team get outshot 45 to 23 and still win? Well done, Ryan Miller, well done.

Also, for some reason all my sketched figures are really, really lanky. This is not a healthy body image kids! Only 2D people can be that thin.

EDIT: This is probably not news to anyone in Canada, but just for the record, Canada beat the USA 3-2 in overtime in the men's hockey final to win gold. Yayyyyyyy!

Sunday, February 14, 2010

Pseudo-Arts

Reading over my last post, I can't help but notice that it is full of, for lack of a better word, 'Pseudo-Arts'. That is, misguided arts-jargon babble that's probably as representative of real art jargon as pseudo-science is representative of science. I seem to really like pseudo-art, as I fill posts with it whenever possible. Perhaps it's simply a good filler. Perhaps it's something more. At this point, I am inclined to suggest that it is representative of a primal human drive to mimic other societal groups, a concept explored briefly in the literature (Monroe, 451), but that's probably just pseudo-anthropology.

Postscript: the real explanation for the graphic? Skewers + hearts = fun times. It's basic pseudo-math.

Saturday, February 13, 2010

Hearty Holiday (Har Har?)


The image, in case you can't tell, is of a symbolic toast involving two martini glasses, one of which is topped with an olive and the other with a heart. What this symbolizes is beyond me, but connoisseurs of post-modern impressionistic martinis might note the striking similarity in colour and tone of the heart and olive pit. The pit, one could argue, is the heart of the olive and thus the skewer passing through the heart is made poignant by the fact that, in the olive, the pit is not subject to such violations as the skewer would tend to pass through the outer flesh of the olive instead. Removed from its olive, the martinis seem to say, the heart is no better than the olive itself. Incidentally, yes, I am single. Jubilations!

In other news, this Monday is a holiday (Family Day) for those of you in Ontario who are not employed by the federal government. University students may even be so greedy as to have an additional week-long holiday in the form of reading week. Of course, this is not really a holiday as you should all be studying. In solidarity with everyone who is actually studying this week, I will admit that I also do not have a holiday Monday. Jubilations!

Thursday, February 4, 2010

Run! It's the Infocalypse!

I read Neal Stephenson's Snow Crash recently, as those of you familiar with that book might infer from the title of this post. It's a book about an information virus; that is, a virus spread by the communication of information (specifically, in Snow Crash, the virus is spread as "binary information" in the form of random white noise in a bitmap, which is rubbish, but let's move on). The "information/mental virus" theme seems to be getting used more these days, with the recent horror film Pontypool being another example.

Augh! It's a mind virus!

I guess this trend shouldn't really be all that surprising considering that we now have real information viruses in the form of computer viruses. Still, thanks to the phenomenal complexity of brains and the wonders of biodiversity, literal mind viruses are going to be confined to science fiction for a while (unless you count religion, catchy songs, or any other cultural meme).

That was all a long digression. What I intended to say with this post is that information, when you think about it, is a really strange and ephemeral thing. We try and think about it in material terms, even going so far as to have crazy ownership laws for it and equally crazy progressive licensing schemes to make sense of those laws, but information really is in a class of its own.

It's a very disconcerting (and frustrating) experience to lose information, be it from a hard drive failure, forgetting a password, or simply deleting it. It doesn't seem like it should be possible to lose information that we've already acquired — what's seen cannot be unseen and all that jazz — except that it absolutely is possible. You may use a password every day, but if you ever forget it, it's gone and you'd better hope there's a backup system or your data wasn't important.

This is all basically a long lead-in to linking to this article on the problem of archiving computer records. I find this interesting because, theoretically, computers offer almost unlimited information storage capabilities and they also allow for those records to be backed up and transferred with complete fidelity. Despite this, our information is less permanent than ever and requires constant maintenance to keep around, like that password you have to use every day just to remember it.

Friday, January 15, 2010

Webpages 'n' Competitions 'n' Stuff

If I haven't been posting a lot lately it's because I've been giving the Waterloo iGEM team wiki a shiny new theme, mulling over the Intelligent Systems Challenge, and going to my actual job.

I don't have much else to report on, although the iGEM forums linked to a really cool story about oil drops which I'm shamelessly reposting here. This experiment is right up there with Millikan's famed oil drop experiment in terms of scientific oil experiments that utterly blow my mind.

Friday, January 1, 2010