I told you so

I was analyzing a nightmare, and asking, why did it frighten me? I think the root fear was people telling me "I told you so". For instance, I'm afraid of my mom being right about religion.

Why am I afraid of people telling me "I told you so"? I think because I want to be respected, and free. I fear that if someone tells me I should do X, and I do Y instead, and X turns out to be better, then they'll feel self-justified in their view, and they'll be able to hold it over me the next time they tell me to do something. If I go against them again, I'll lose their respect, if I haven't already; but if I obey them rather than myself, then I won't be free.



I finished Logicomix. It's great. I agree with Russell's portrayed "answer" to whether America should join World War II, which is "think about it". I also like Gödel's theorem -- which I knew already.

I'm not sure I like the attitude of the writers, oddly enough. I feel myself sympathizing with Christos, the computer scientist they brought in to make sure all their math was right, though I don't completely agree with him either. I feel like Christos thinks that math has been "solved" with computer science. But I don't think it has (though I have held that belief in the past).

But I feel like the writers really think logic is madness. There's this notion that we can't think purely logically, at which point I feel like the authors are thinking "we also need emotions and a sense of humanity". But that's not what Gödel's proof shows. It shows that logic is incomplete. It gives no reason to believe that human thought is somehow more powerful than logic.



Both Watchmen and Logicomix reference Alexander "untying" the Gordian Knot with a sword. The character in Watchmen interprets this "solution" as genuinely innovative. The character in Logicomix takes the solution as not really a solution. I lean in that direction myself. If the goal of untying the Gordian Knot were to unlock some box, then opening the box would be the real goal, and cutting the knot would be as good as untying it. However, in the case of the Gordian Knot, the goal is to untie the knot. The goal is not even to make the knot disappear or cease to exist. The goal is to untie it.


thinker's resort

I guess I mainly want lots of possible places to walk and think or sit and think or lie down and think, whilst having interesting natural beauty and art to look at, and easy access to food. The closest I've seen is a college campus, though they need to make lying down easier. Couches outside, perhaps?

line upon line

Idea 1: I recall this phrase from the Book of Mormon: line upon line, precept upon precept. I forget the original context. My original interpretation of it was that we learn things one bit at a time, like learning to add before learning to multiply.

Idea 2: I often have ideas come to mind, and I think "yes! that's it! that solves everything! I need to integrate that into my world view!" And then I think about it more, and the idea turns out to be a bit half-baked or not quite as earth-shattering as I thought, and even if it's good, I don't always succeed in remembering it day to day. But, I'll often see good ideas resurface after I've forgotten them.

Combined idea: I thought line upon line was learning to add before learning to multiply, but now I'm thinking that a more useful and accurate interpretation is learning to add many times, again and again, before it truly sinks in. So when I get an idea and think "yes! that's it!" it is good to acknowledge it, but I shouldn't stress about integrating it into my world view right away, because my brain doesn't work that way -- I'll need to see the idea many times as good before it "sinks in" -- and this is not so bad since good ideas tend to resurface anyway.


I'm reading Logicomix, which is fantastic. At some point, it portrays Hilbert saying something like: axioms are not necessarily true, they are just the starting point of the logical process for a proof.

I would go further to say that the logical process itself is just a pile of assumptions. The axioms of logic are assumptions. The rules for logical inference are assumptions. We even assume we've applied the rules correctly when we carry out a proof.

Logic is not so much true as it is useful. The real "grounding" of logic is just that it seems to work often, and where it works often, we trust it more. But it never seems good to say we trust it completely for all problems, and just because it fails sometimes on paradoxes and such doesn't mean that it isn't useful for any problems.

For example, I think of something like modus ponens -- if a is true and a implies b, then b is true -- and wonder, how do we know that? And I don't think we do know it. It's just that, if I build a mental model of a physical system using this rule, then the predictions made by my mental model about real world events turn out to be true, based on my experience. And of course, building a mental model "using" the rule implies some sort of interpretation of the rule, by my brain, which may or may not be the same as the interpretation that other brains give to this rule when they use it to build mental models.
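To make this concrete, modus ponens can be treated as a purely mechanical rewrite rule rather than a known truth. Here is a minimal sketch in Python (the facts and implication names are made-up examples): we simply assume the rule and grind it against a set of facts -- the program doesn't "know" modus ponens is true, it just applies it.

```python
def forward_chain(facts, implications):
    """Repeatedly apply modus ponens: if `a` is a fact and (a -> b)
    is an implication, add `b` as a fact. Stop when nothing new appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in implications:
            if a in facts and b not in facts:
                facts.add(b)
                changed = True
    return facts

facts = {"it_rained"}
implications = [("it_rained", "ground_wet"), ("ground_wet", "shoes_wet")]
print(sorted(forward_chain(facts, implications)))
# -> ['ground_wet', 'it_rained', 'shoes_wet']
```

Whether the derived facts match reality is an empirical question about our mental models, not something the rule itself guarantees.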

So, I guess I feel less and less sure that math and logic have a life -- an intrinsic truth -- beyond just being problem solving tools, subject to interpretation by brains and computers.


There is this philosophical question of how to "ground" ethics and morality, without God.

I think the answer is: ethics and morality are grounded in whatever a society happens to think about them, and they can vary from one society to the next. And societies are not always in complete agreement about what is good and evil, making some actions morally grey within that society.

Isn't this circular? I mean, it seems like a society could say "we think X is good because we think X is good, and ethics are grounded in what we happen to think", and how could anyone convince it otherwise?

The answer is: if a society really did think that X was good because it thought X was good, then it really would be good in that society, and it really would be hard to convince that society otherwise.

However, this does not mean that all societies will employ this logic in coming to decisions about what to consider good or evil. Most societies will probably employ logic like: X is good because it promotes well-being, and Y is bad because it causes suffering.

When trying to convince a society that something is good or bad, there is no "logical foundation" one can turn to. I think it's a messy matter of simply convincing lots of people using the techniques of convincing people of things.

So.. do I think people are necessarily evil if they do things that their society thinks are evil? No. I don't think people or actions are evil in and of themselves. Societies can think things are evil, and if they do, then those things are thought of as evil by those societies. So if someone does something that their society thinks is evil, then I think that person will be thought of as evil by their society.

Do I personally think they are evil? I might. If I did, I feel like all that would reveal is that I personally thought they were evil.

free association

It is interesting to me that the brain seems to do useful thought when I let it run free. I've had the experience of going to bed thinking about a math problem, and waking up with the answer, but the answer was arrived at subconsciously. And when I have a difficult problem to think about, I often put my head down, and it seems to other people that I'm sleeping, but I'm pretty sure I'm thinking, even though I'm not consciously thinking, because I will come up with reasonable solutions after this process.

So why do I go through the subjective conscious "effort" of working on problems if my subconscious mind could solve them? I've thought that maybe it has to do with "symbolic processing", e.g., reasoning about things using logic and symbol manipulation. Of course, my brain can do math subconsciously, so that seems like a hole in this theory, but I feel like before my brain can do math subconsciously, I need to build a mental model of the problem. Once a mental model is in place, I feel like my subconscious can "intuit" a solution.. so maybe the conscious mind is necessary to build mental models, and reason about things that are not "intuitive".

Another candidate for what the conscious mind does is adjusting the mental zoom level, and directing what to zoom in on.

In any case, I've tried not controlling myself, just letting my subconscious make decisions, and it seems to work. That is, my body does do stuff, all on its own. It can even talk to people, all on some sort of "auto-pilot".

So.. I'm not sure. Note that the issue of "what does the conscious mind do?" is separate from the question of "free will". Even though I don't believe in free will, I still admit to having the subjective experience of "making decisions". I'm just curious why some of the decisions made by my brain are interpreted subjectively as having been made consciously, and some of them are not interpreted that way.

free will

This is an excerpt from a previous post: ...I don't believe in free will. More specifically, I don't understand what free will is. I feel like brains have a set of desires, and they try to meet those desires, and they might make compromises between conflicting desires, and they might accidentally do something that doesn't meet their desires by mistake, but brains are not capable of deciding to do things that don't meet their desires. I feel like someone reading this might slap themselves in the face and say "Ha! I did something that didn't meet any of my desires." However, I would say to that person, "you wanted to prove me wrong -- that's the desire you were satisfying when you slapped yourself in the face, and you wanted to prove me wrong so badly that you compromised your desire to not hurt yourself."

Anyway, I don't know what it would even look like for an entity to make a "free will" decision that wasn't based on any desires. Why would they have made that decision and not some other decision? If they can't answer that question, I think the decision was random, not free. If they can answer the question, then the real question becomes, did they make a free will choice to have that value system for comparing decisions, rather than some other value system? If so, we ask "why did they choose the value system they chose?", and we apply the argument recursively. If not, then the freedom seems to end there.

I think humans are born with a set of desires that they don't choose, like wanting food and air and love. I think other desires are logically derived from those, e.g., I want to go to school, so I can get a job, so I can earn money, so I can buy food.

Incidentally, I do believe in jail. If someone does something bad, I don't believe in "judging" them in the sense of thinking "shame on you! you evil person!", but if I think they'll do it again, and I'm afraid of it happening to me, then I would like them to be prevented from doing it.

opposite of fear

First, I think fear is an emotion felt about the outcome of future events.

For instance, I might not know whether a girl will say "yes" to going on a date with me, and I might fear that she'll say "no". Once I ask her, and she gives an answer, my fear goes away. Even if she says "no", I'm no longer afraid that she'll say no. It just sucks.

I think the opposite of fear is hope.

I might hope that the girl will say "yes". And once I ask her, and she gives an answer, my hope goes away. Even if she says "yes", I no longer hope that she'll say yes. It just rocks.

I said this to a friend, and he said that the opposite of both fear and hope is indifference. I think this is true. It reminded me of a post I wrote about opposites, where I suggest that opposites often come in threes rather than twos. For instance, black and white are opposites, but the notion of clear or transparent is opposite to both of them.

This notion of indifference led to my thoughts about mental zoom.

behind the mind's eye

I talk about the mind's eye zooming in and out here. I talk about creating larger and larger infinities here. I feel like the two are related.

The mind's eye sees a "set" of stuff in front of it. Zooming out involves seeing a larger set of stuff, including the spot where our mind's eye used to be.

Of course, since our mind's eye is now someplace new, it also seems like this creates the potential for an even more zoomed back position which would see in front of it our current mind's eye.

Hence, it seems like there's always something behind us that we can't see.


Here is a game I play sometimes. I try to imagine ways of organizing 1's. The game usually goes like this.
  • 1
  • 11
  • 111
  • we see a pattern here, and we can invent a compressed way to write it, like "rep(1)", which means "repeat the number 1 an infinite number of times"
  • rep(1)
  • rep(rep(1))
  • rep(rep(rep(1)))
  • we see another pattern here. we're getting deeper and deeper nested calls of "rep", but the "rep" notation itself is not expressive enough to represent this, so we need a new way to compress this information. Let's represent this as "rec(rep, 1)", i.e., an infinite number of recursive calls to "rep" with a base case of "1".
  • rec(rep, 1)
  • rec(rep, rec(rep, 1))
  • rec(rep, rec(rep, rec(rep, 1)))
  • we see another pattern here of recursively calling repeat on recursively calling repeat again and again.. unfortunately this pattern is not expressible in terms of just "rec" and "rep" as we've defined them. We need something new, again. Let's express the infinite application of "rec" and "rep" to 1 as "rec2(rec, rep, 1)"
  • rec2(rec, rep, 1)
  • rec2(rec, rep, rec2(rec, rep, 1))
  • rec2(rec, rep, rec2(rec, rep, rec2(rec, rep, 1)))
  • we see another pattern here, and we could imagine that the next thing to do is...
  • rec3(rec2, rec, rep, 1)
  • ...and then...
  • rec4(rec3, rec2, rec, rep, 1)
  • which is a pattern in itself, of repeatedly inventing "rec" functions with larger and larger numbers of arguments.. we might collapse this whole business into "superRec(rep, 1)"
  • superRec(rep, 1)
  • superRec(rep, superRec(rep, 1))
  • superRec(rep, superRec(rep, superRec(rep, 1)))
  • rec2(superRec, rep, 1)
  • rec3(rec2, superRec, rep, 1)
  • rec4(rec3, rec2, superRec, rep, 1)
  • now we see a similar pattern to "rec4(rec3, rec2, rec, rep, 1)", except there's a "superRec" where we had "rec".. we could represent the infinite extension of this new pattern as "superRec2(superRec, rep, 1)"
  • superRec2(superRec, rep, 1)
  • superRec2(superRec, rep, superRec2(superRec, rep, 1))
  • superRec2(superRec, rep, superRec2(superRec, rep, superRec2(superRec, rep, 1)))
  • rec3(superRec2, superRec, rep, 1)
  • rec4(rec3, superRec2, superRec, rep, 1)
  • now we see a similar pattern to "rec4(rec3, rec2, superRec, rep, 1)", except there's a "superRec2" where we had "rec2".. we can guess that eventually all of our "recN"'s will be replaced with "superRecN"'s, and we can represent that whole mess as "uberRec(rep, 1)"
  • uberRec(rep, 1)
  • uberRec(rep, uberRec(rep, 1))
  • uberRec(rep, uberRec(rep, uberRec(rep, uberRec(rep, 1))))
  • rec2(uberRec, rep, 1)
  • rec3(rec2, uberRec, rep, 1)
  • rec4(rec3, rec2, uberRec, rep, 1)
  • so we went from "rec4(rec3, rec2, rec, rep, 1)" to "rec4(rec3, rec2, superRec, rep, 1)" to "rec4(rec3, rec2, uberRec, rep, 1)".. we can guess that we're going to need something like "ultraRec" and then "megaRec" and eventually we'll run out of words, so instead of "super" and "uber" and "ultra" and "mega" we'll call them "super1Rec", "super2Rec", "super3Rec", and the infinite extension of them we'll call "superInfiniteRec"
  • superInfiniteRec(rep, 1)
  • superInfiniteRec(rep, superInfiniteRec(rep, 1))
  • superInfiniteRec(rep, superInfiniteRec(rep, superInfiniteRec(rep, superInfiniteRec(rep, 1))))
  • so this seems like it's going to lead us to something like "superInfiniteRec2" and "superInfiniteRec3", which we'll eventually call "uberInfiniteRec" and then "ultraInfiniteRec", and then we'll replace the "uber"'s and "ultra"'s with "super2"'s and "super3"'s and eventually "superInfinite" giving us "superInfiniteInfiniteRec"
  • superInfiniteInfiniteRec(rep, 1)
  • superInfiniteInfiniteInfiniteRec(rep, 1)
  • superInfiniteInfiniteInfiniteInfiniteRec(rep, 1)
  • now there's an obvious pattern here.. repeated use of the word "Infinite" in our function name. However, it is less clear how to represent the infinite extension of it.. we might try something like "super[rep("Infinite")]Rec(rep, 1)"
  • super[rep("Infinite")]Rec(rep, 1)
  • super[rep(rep("Infinite"))]Rec(rep, 1) whatever that means
  • super[rec(rep, "Infinite")]Rec(rep, 1)
  • super[superRec(rep, "Infinite")]Rec(rep, 1)
  • super[super[rep("Infinite")]Rec(rep, "Infinite")]Rec(rep, 1)
  • super[super[super[rep("Infinite")]Rec(rep, "Infinite")]Rec(rep, "Infinite")]Rec(rep, 1)
  • um.. now I'm pretty sure this notation still has meaning, and I can see a new pattern, but it seems like expressing this pattern will be even messier. Maybe it would be easier if I had made some different notation choices above..
The interesting thing is that I keep needing to invent new notations to represent new organizational concepts. In fact, I doubt that there exists any "clean" notation that keeps working in some sensible, self-consistent manner forever. Weirder still, I think the game can be played, in theory, past the ability to create notations: there are only countably many possible notations, but uncountably many ordinals, so most iterations of the game can never be written down at all. I think this is what the Church-Kleene ordinal is about -- the first ordinal that no computable notation system can reach -- though remarkably, that ordinal is itself still countable.
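The finite stages of the game can be sketched mechanically. Here is a toy Python encoding of my own (nested tuples as terms; nothing standard): each invented name like "rec" just labels the limit of an infinite iteration that no program could actually carry out, which is exactly why new notation keeps being needed.

```python
def rep(x):
    """Symbolic constructor: the term 'repeat x infinitely'."""
    return ("rep", x)

def rec(name, x):
    """Symbolic constructor: infinite recursion of the named operator on x."""
    return ("rec", name, x)

def iterate(f, x, n):
    """Apply a one-argument constructor n times: f(f(...f(x)...))."""
    for _ in range(n):
        x = f(x)
    return x

print(iterate(rep, 1, 3))                      # ('rep', ('rep', ('rep', 1)))
print(rec("rep", 1))                           # the name invented for the limit
print(iterate(lambda x: rec("rep", x), 1, 2))  # ('rec', 'rep', ('rec', 'rep', 1))
```

The program can only ever print finitely nested terms; every "limit" step in the game is a leap the notation makes that the computation cannot.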

Now, in the game above, I start with "1", "11", "111", which one could think of as "1", "2", "3". In this case, one could think of "rep(1)" as the number that comes after all the integers, and "rep(1) 1" as the number after that, on and on.. hence, you can use this game as a way of generating numbers to count things with. In mathematics, these numbers are called ordinals, and Cantor normal form is one way of expressing them (my "rep(1)" would be "ω", my "rep(1) 1" would be "ω + 1", my "rep(1) rep(1)" would be "ω·2", and my "rec(rep, 1)" would be "ω^ω", or something like that).

Now, there's a notation for talking about levels of infinity, or "cardinals". The countable infinity is called ℵ0, in aleph notation, and ℵ1 is the next larger cardinal (the continuum hypothesis says ℵ1 is the size of the uncountable real numbers). You can keep counting ℵ2, ℵ3, ℵ4, and eventually ℵω, ℵω+1, ..., ℵω·2, and so on. Notice that the ordinals act as subscripts for the cardinals.

Now how many cardinals are there? Presumably more than countably many, since there are uncountably many ordinals for use as subscripts. But we don't mean uncountable in the ℵ1 sense -- there are more cardinals than any cardinal can count: they form what set theorists call a proper class.

So in summary, the game of making things bigger can be played "infinitely", but not in the way I would have thought. I would have thought it could be played "infinitely" in the sense that I could keep coming up with something bigger, but in fact, it can be played to the point where I can't come up with something bigger, and yet there still is something bigger -- I just can't come up with it, due to the limitations of notation while living in an ℵ0 or possibly ℵ1 sized world.


mental zoom

My mind's eye feels like a microscope. I can zoom in to different levels. For example, I might be looking at a ladybug on a leaf. Here are some different levels:
  • seeing the intricate details of the ladybug, focused on the small part of my retina that contains the image
  • seeing the plant that the ladybug is on, and hearing birds overhead
  • aware of my body standing on a path in the woods, looking at a ladybug
  • aware of my goals; I know where I am in the woods, and where I hope to end up
  • aware of higher level goals; I know why I chose to come to the woods
  • aware of mental systems, like the fact that my brain has goals, and that it has a variety of systems in place for achieving goals
  • aware of meta-mental systems, like the fact that I seem to have a "mind's eye" that is seeing mental systems, including the mental construct of a "mind's eye"
I "feel" the stuff at the level I'm zoomed in to. If I'm zoomed into the intricate details of the ladybug, then those details seem "real". I'm subjectively experiencing them. As I zoom out, I'm less "in the moment" of experiencing the ladybug, and more in the moment of experiencing some higher level process.

At one level, I might be aware of a pain in my leg, like a bug bite. As I zoom in (to something else) or out, this pain fades.

One tool for zooming out, for me, is "indifference". In order to see something from a higher level, I need to not care about the stuff at the lower level. Caring about it seems to keep my mental focus at that level. At a higher level, my mind can still deal with issues at a lower level, but without getting "wrapped up" in them. For instance, if I zoom out from the level of really feeling the pain of a bug bite, I can still put some ointment on the bug bite, without feeling the pain as much. Similarly, if I am feeling scared or angry, I can zoom out and process the situation more objectively without being "controlled" by the fear or anger. Of course, if I zoom out too much, I become indifferent to how I handle the situation at all, because my mind is focused on much higher level issues, and I will appear outwardly to just be staring into space.

I think I may be my mind's eye. I feel like I do things, but I'm not sure that I really do them. I think I just feel like I do. I'm not sure what the connection is between subjective awareness (the stuff that seems to come from where the mind's eye is looking) and action (the stuff we might call "free will"). The more I think about this mental model, the more it feels like I am the mind's eye, but it does still feel like I have one control knob, and that is the zoom level -- and what to zoom into -- but I'm not even sure that the mind's eye itself is in control of that.

I think meditation can help people zoom back. Meditation seems related to indifference and awareness, which both seem like useful tools for zooming back, in addition to the tool of calming the mind (which is itself a form of indifference).

I think the zoom level right behind our physical body is our imagination, and this can be a tricky level to zoom behind, because our imagination is capable of making it appear as if we are zooming behind it. For instance, in reading this, I might imagine zooming back from my act of reading this, past my imagination, and seeing all kinds of stuff happening in my mind, and everything getting smaller and farther and farther away -- but it's possible that that is all taking place at a static level of zoom at the level of my imagination. That is to say, the things that seem like other stuff going on in my mind may not in fact be going on, but are just fancy images that my imagination conjures up for what it might look like to see stuff going on in my mind. My mind is very tricky this way. I have felt at times that I am actually zoomed back, and actually seeing real stuff that is going on, and this feels subjectively different from merely imagining that I'm zooming back, but the difference is subtle.

I think some drugs can zoom people back (or forward) more than the mind usually does in the course of a person's normal life. Of course, drugs may also "tilt the mind's mental camera" to look at things a little sideways, seeing things differently, e.g., seeing colors as sounds, or emotions as images. The benefit of drugs may be becoming aware of mental zoom levels that were never achieved before. The danger is that zooming way back, having never zoomed that far back before, and having the camera tilted as well, can be confusing and scary. There is also the risk of taking physical actions based on this possibly skewed and misaligned view of reality, so I would want someone to look after me.

When we are focused at a particular zoom level, it is possible to forget that zooming is possible. Maybe my girlfriend breaks up with me, and I'm very sad and so wrapped up in the sadness that I forget it is possible to zoom back a bit and see the sadness more objectively, and in context with other things going on in my mind.

When I am zoomed back far enough, I can see my mind's processor for predicting future events. This can feel like seeing the future, though I'm sure it is just my mind's best guess about the future.

Each time I manage to zoom farther back, it takes a while to understand what is really going on, sort of like being a baby and trying to learn how to use my limbs.

river of thoughts

I sit on the banks of the river of my own thoughts. I see the river of thoughts spilling out into an ocean of thoughts. The river is fast at times. The ocean is deep. I am sitting in pleasant green grass on the bank. I am detached from my thoughts, though I imagine I am somewhere in them. The landscape is shaped by my thoughts, the way land is shaped by rivers. What is the pleasant grass under my mind's eye? What am I sitting on? What is the sky in this place? The metaphor is not perfect.


Pomodoro Hiring

The Pomodoro Hiring project is an attempt at real-time expert hiring. The idea is to standardize the hiring process around 20-minute blocks of time. The hope is to attract the spare time of experts who may be otherwise employed.

There is a splash page here. Over 600 people from Hacker News have signed up for the beta, which is still under development.

Gmail Valet

oDesk Research collaborated with Nicolas Kokkalis at Stanford on a tool called GmailValet for giving people partial access to a Gmail account. We want to see if people can use this tool to hire remote assistants on oDesk to help with email processing. A paper has been submitted to CSCW 2013 (rumor has it that it has been accepted).