
11/4/12

management technology

Here's a caricature of modern development practices: there's a boss who has "people" skills (represented with the filled circle within the circle), and they manage engineers/programmers who have "problem solving" (triangle) and "coding" (diamond) skills.

The dotted line represents the payroll of the company. It is difficult to experiment with things above this line, e.g., it is hard to fire people. It is much easier to experiment with stuff below this line, and this is where "technological progress" occurs.

Here's a new model that I think we can achieve with online labor markets like oDesk. A new sort of "engineer" is created who has some "people" skills and "problem solving" skills, and they contract work out to people with "coding" skills. Note that these coders are not what we think of today when we think of programmers at Google or Microsoft -- if you've looked at interview questions for such places, you'll see that they test problem-solving skills rather than coding skills. In comparison to problem solving, coding is relatively easy.

Note that the payroll line moves up. This allows for experimentation with ways of hiring and arranging work beneath this line. This allows for the development of what we might call "management technology" (name credit to Devin Fidler).

evolution

We often think about evolution as organisms surviving, mating, and giving birth to organisms which will hopefully survive, mate, and give birth. However, I think evolution has gone through some meta-evolution, creating new powerful evolutionary tools.

At first, we had the evolution of particles. This is closest to what creationists complain about when they say "how could random chance have created a living organism?" Somehow, it seems like particles really did randomly fit together into some sort of particle that could reproduce itself.

After some time, a meta-evolution occurs in the form of cells and DNA. Probably DNA came before cells, but I don't know how that works. Anyway, cells and DNA can evolve more efficiently than pure randomness by using restricted randomness. That is, when DNA is copied, it is usually copied exactly, but sometimes mistakes are made, and these mistakes -- mutations -- allow the exploration of different sorts of cells. But DNA is structured in such a way that random mutations typically don't screw everything up completely. DNA encodes information in a very inefficient manner, taking up much more space than it theoretically needs. In fact, lots of information in human DNA isn't even used. This inefficiency is good, though. It is the feature that keeps changes from affecting too many things. If DNA were encoded using zip compression, then any mutation at all would completely change the entire meaning of the DNA strand.
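Here's a toy way to see that last point in code (a sketch; the "genome" is just a made-up byte string, and zlib stands in for "zip compression"). Flip one byte of the raw string and exactly one character changes; flip one byte of the compressed string and the whole thing typically becomes undecodable:

    import zlib

    genome = b"the quick brown fox jumps over the lazy dog " * 20

    # mutate one byte of the raw "genome": the damage stays local
    raw = bytearray(genome)
    raw[100] ^= 0xFF
    print(sum(a != b for a, b in zip(raw, genome)))  # 1 -- one character changed

    # mutate one byte of the compressed "genome": the damage is global
    packed = bytearray(zlib.compress(genome))
    packed[len(packed) // 2] ^= 0xFF
    try:
        zlib.decompress(bytes(packed))
    except zlib.error as err:
        # typically fails outright, rather than changing one character
        print("compressed copy is ruined:", err)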

After some more time, another meta-evolution occurred, in the form of organisms. Organisms have two parents. This allows organisms to explore an even more restricted space, by essentially taking the mid-point between good points in this space, as well as searching randomly a bit with mutations. Cells, on the other hand, only have one parent, so the only mechanism they have for exploring the space is mutation.

After even more time, another meta-evolution occurred, in the form of brains. Brains divide an organism into hardware and software, where the hardware evolves in a 2-parent organism way, but the software can evolve differently. Brains encode "behaviors", and if we think of behaviors as software-organisms, then they are organisms with potentially many parents. I'm not sure exactly how behaviors are transferred, but I imagine that a creature can observe behaviors of other organisms and adopt those behaviors without needing to mate and give birth to a new creature (at least humans seem to be capable of this with "mirror" neurons). This allows an even more refined way of searching behavior space, by essentially taking weighted mid-points of many parents.
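Here's a toy, one-dimensional caricature of the three search strategies so far (all the names and numbers here are mine, purely for illustration):

    import random

    def mutate(x, scale=0.5):
        """Cell-style search (one parent): a random jiggle around one good point."""
        return x + random.uniform(-scale, scale)

    def cross(x, y):
        """Organism-style search (two parents): the midpoint of two good points,
        plus a little random mutation."""
        return mutate((x + y) / 2, scale=0.1)

    def imitate(parents, weights):
        """Behavior-style search (many parents): a weighted midpoint of many
        observed 'parent' behaviors."""
        return sum(p * w for p, w in zip(parents, weights)) / sum(weights)

    # e.g., two decent guesses at 3.0 and 5.0 yield a child near 4.0
    print(cross(3.0, 5.0))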

After even more time, another meta-evolution occurred, in the form of imagination. Imagination is the ability of a brain to simulate reality in its head, without actually doing anything. This allows a brain to test a behavior without suffering too badly if it is a bad behavior. This shortens the turn-around time for exploring behavior space.

Now behaviors, or "ideas", seem to be like organisms themselves, and they are evolving in the ecosystem of brains. That is, the life and lineage of an idea or behavior doesn't necessarily follow blood lines. Hence, we might expect there to be "meta-evolutions" of idea-organisms. And it's possible that there already has been. Some ideas may already have created a sort of "cell and DNA" structure (memes?) so that they can more reliably survive and reproduce, with a more refined mechanism for searching the space.

In fact, I feel like one survival strategy of ideas is to infect a brain, grow, and be born in the form of a human thinking "I just came up with an idea!", where really, the idea had already been come-up-with, and it's just a good strategy for ideas to make their "mothers" think they are original creations so that the mothers will love and care for them, e.g., tell other people about them, so they can survive and reproduce. But the mothers didn't really create them, any more than a human mother cobbled together the DNA of their child.

Hence, I think the idea of "idea ownership" is bunk -- even if it has been a good strategy for ideas to make us feel this way thus far. That is, by understanding better how ideas actually evolve, we may be able to create an even more efficient ecosystem for ideas, in the same way we can develop better agricultural methods by understanding better how plants grow.

many worlds interpretation

Judging by the figure on Wikipedia for the many-worlds interpretation of quantum mechanics, universes branch off as "observations" are made, where the outcome of the observation is one thing in one branch (e.g. cat dead), and another thing in the other branch (e.g. cat alive).

However, I think the many-worlds interpretation is more like: all possible universes always exist, with different amounts of probability, and these probabilities shift over time. And the way they shift depends on the distribution of probability, which seems to imply that the future of our current universe depends in part on the probability of various parallel universes.

This in turn implies that there isn't really one version of history, but rather, the current state of our universe feeds from a distribution of possibilities for our immediate past. I think this is what is meant by the statement in Wikipedia: "Many-worlds implies that all possible alternative histories ... are real".

10/27/12

reduction

When talking about computer sciency problems -- the sort of problems that people like to call NP-complete and such -- people often say things like "problem A can be reduced to problem B".

The word reduce is confusing to me.

This "reduction" makes it sound like a large problem called A is becoming smaller -- reducing in size -- so it can fit somehow in B.

But that's wrong. B is generally a bigger and harder problem than A.

For example, we might "reduce" the problem of finding the maximum number in a set of numbers to the problem of sorting the numbers. That is, we can sort the numbers, and then we get the maximum for free by looking at the last number in the sorted order. But sorting is harder than finding the max, so we didn't really reduce the size of the problem when we sorted the numbers -- we did more work than we needed to.
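In code, the example looks something like this (a sketch, with names of my own choosing):

    def find_max(nums):
        """The direct solution: one pass, O(n)."""
        best = nums[0]
        for x in nums[1:]:
            if x > best:
                best = x
        return best

    def find_max_via_sort(nums):
        """The 'reduction': express max-finding in terms of sorting.
        Correct, but O(n log n) -- more work than the problem needs."""
        return sorted(nums)[-1]

    nums = [3, 1, 4, 1, 5, 9, 2, 6]
    assert find_max(nums) == find_max_via_sort(nums) == 9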

So, I propose that we stop saying "reduce", and start saying something like "embed" or "express in terms of".

I can embed the problem of finding the maximum number into the problem of sorting numbers, or I can express the problem of finding the maximum number in terms of the problem of sorting numbers.

10/23/12

arguing

My uncles have lively discussions -- arguments, one might call them. I have inherited the trait of enjoying arguing, and I can get pretty emotionally invested in them.

My philosophy of arguing changed a bit when I did debate in high school. Since I was forced to argue both sides of each topic, I grew to feel that both sides of every argument were "right". Of course, the topics tended to be chosen for having good points on both sides.

In any case, I'm not sure whether both sides of an argument are "right", but here's a made-up word-to-the-wise about arguing:

You can only truly win an argument if you can argue in favor of your opponent's side better than they can.

10/16/12

singularity

I had a little conversation with a friend today about the "singularity" -- the name commonly given to the idea of what will happen to humanity as people are wired together with computers, and computers become powerful enough to simulate human-level intelligence.

A couple things occurred to me. First, although I believe that something like the singularity will eventually happen in reality, I also want it to happen, and so it's hard to know if my belief in the singularity is distorted by that desire, in the way some people might accuse religious people of just wanting religion to be true so they don't think life is meaningless.

Second, I have a somewhat unconventional view of my hopes for the singularity. A lot of people who believe in the singularity see it as a way for man to become immortal, which is an appealing way to avoid the modern "scientific" belief that death is the complete end of a human's existence.

I don't have that belief though. I told my friend that I wasn't concerned if I made it to the singularity or not. And he pressed me about why. And it forced me to articulate my reason, which is this: I'm not sure I'll make it to the singularity. I might die first. And I can't prevent the possibility of my death, no matter what I do. So I want a view of the singularity that makes it "ok" if I die tomorrow.

Incidentally, my belief -- that makes it "ok" to die tomorrow -- is something along the lines of believing that the universe is on a course toward greater intelligence, which is somehow weirdly inevitable by the laws of nature -- the strange force that causes life to arise and tend toward more complex and interesting life. And I am a part of that. I am programmed to act as a sort of neuron in the brain of humanity, and I can't help but do that. That is, whether I like it or not, it seems like that's what I do, and I've decided that I like it. It makes me less concerned about what happens to Greg, allowing me to concentrate more on "solving problems", which is what my programming tells me to do anyway (which is a bit circular, I realize, since no matter what I do, or what viewpoint I take, I can say that's what I'm programmed to do...). Anyway, I need to think about this more to articulate what exactly I find appealing about it...

9/29/12

tip

If you see a full real-time scaled-down model of our universe, don't smash it.

8/13/12

Logicomix

I finished Logicomix. It's great. I agree with Russell's portrayed "answer" to whether America should join World War II, which is "think about it". I also like Gödel -- whose work I knew already.

I'm not sure I like the attitude of the writers, oddly enough. I feel myself sympathizing with Christos, the computer scientist they brought in to make sure all their math was right, though I don't completely agree with him either. I feel like Christos thinks that math has been "solved" with computer science. But I don't think it has (though I have held that belief in the past).

But I feel like the writers really think logic is madness. There's this notion that we can't just think logically, at which point I feel like the authors are thinking "we also need emotions and a sense of humanity". But that's not what Gödel's proof shows. It shows that any consistent logical system rich enough to express arithmetic is incomplete. But it leaves no reason to believe that human thought is somehow more powerful than logic.

8/12/12

connection

Both Watchmen and Logicomix reference Alexander "untying" the Gordian Knot with a sword. The character in Watchmen interprets this "solution" as genuinely innovative. The character in Logicomix takes the solution as not really a solution. I lean in that direction myself. The goal of untying the Gordian Knot is not to unlock some box, in which case opening the box is the real goal, and cutting the knot is as good as untying it. However, in the case of the Gordian Knot, the goal is to untie the knot. The goal is not even to make the knot disappear or cease to exist. The goal is to untie it.

8/11/12

line upon line

Idea 1: I recall this phrase from the Book of Mormon: line upon line, precept upon precept. I forget the original context. My original interpretation of it was that we learn things one bit at a time, like learning to add before learning to multiply.

Idea 2: I often have ideas come to mind, and I think "yes! that's it! that solves everything! I need to integrate that into my world view!" And then I think about it more, and the idea turns out to be a bit half-baked or not quite as earth-shattering as I thought, and even if it's good, I don't always succeed in remembering it day to day. But, I'll often see good ideas resurface after I've forgotten them.

Combined idea: I thought line upon line was learning to add before learning to multiply, but now I'm thinking that a more useful and accurate interpretation is learning to add many times, again and again, before it truly sinks in. So when I get an idea and think "yes! that's it!" it is good to acknowledge it, but I shouldn't stress about integrating it into my world view right away, because my brain doesn't work that way -- I'll need to see the idea many times as good before it "sinks in" -- and this is not so bad since good ideas tend to resurface anyway.

logic

I'm reading Logicomix, which is fantastic. At some point, it portrays Hilbert saying something like: axioms are not necessarily true, they are just the starting point of the logical process for a proof.

I would go further to say that the logical process itself is just a pile of assumptions. The axioms of logic are assumptions. The rules for logical inference are assumptions. We even assume we've applied the rules correctly when we carry out a proof.

Logic is not so much true as it is useful. The real "grounding" of logic is just that it seems to work often, and where it works often, we trust it more. But it never seems right to say we trust it completely for all problems; and conversely, just because it sometimes fails on paradoxes and such doesn't mean it isn't useful for any problems.

For example, I think of something like modus ponens -- if a is true and a implies b, then b is true -- and wonder, how do we know that? And I don't think we do know it. It's just that, if I build a mental model of a physical system using this rule, then the predictions made by my mental model about real world events turn out to be true, based on my experience. And of course, building a mental model "using" the rule implies some sort of interpretation of the rule, by my brain, which may or may not be the same as the interpretation that other brains give to this rule when they use it to build mental models.
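For what it's worth, this is exactly how proof assistants treat modus ponens: not as a truth, but as a mechanical rule that the machine interprets the same way every time. A minimal sketch in Lean (my own example, not from the book):

    -- modus ponens as a proof term: given a proof `ha : a` and a proof
    -- `hab : a → b`, applying `hab` to `ha` mechanically yields a proof of `b`
    theorem mp {a b : Prop} (ha : a) (hab : a → b) : b :=
      hab ha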

So, I guess I feel less and less sure that math and logic have a life -- an intrinsic truth -- beyond just being problem solving tools, subject to interpretation by brains and computers.

ethics

There is this philosophical question of how to "ground" ethics and morality, without God.

I think the answer is: ethics and morality are grounded in whatever a society happens to think about them, and they can vary from one society to the next. And societies are not always in complete agreement about what is good and evil, making some actions morally grey within that society.

Isn't this circular? I mean, it seems like a society could say "we think X is good because we think X is good, and ethics are grounded in what we happen to think", and how could anyone convince it otherwise?

The answer is: if a society really did think that X was good because it thought X was good, then it really would be good in that society, and it really would be hard to convince that society otherwise.

However, this does not mean that all societies will employ this logic in coming to decisions about what to consider good or evil. Most societies will probably employ logic like: X is good because it promotes well-being, and Y is bad because it causes suffering.

When trying to convince a society that something is good or bad, there is no "logical foundation" one can turn to. I think it's a messy matter of simply convincing lots of people using the techniques of convincing people of things.

So.. do I think people are necessarily evil if they do things that their society thinks are evil? No. I don't think people or actions are evil in and of themselves. Societies can think things are evil, and if they do, then those things are thought of as evil by those societies. So if someone does something that their society thinks is evil, then I think that person will be thought of as evil by their society.

Do I personally think they are evil? I might. If I did, I feel like all that would reveal is that I personally thought they were evil.

free association

It is interesting to me that the brain seems to do useful thought when I let it run free. I've had the experience of going to bed thinking about a math problem, and waking up with the answer, but the answer was arrived at subconsciously. And when I have a difficult problem to think about, I often put my head down, and it seems to other people that I'm sleeping, but I'm pretty sure I'm thinking, even though I'm not consciously thinking, because I will come up with reasonable solutions after this process.

So why do I go through the subjective conscious "effort" of working on problems if my subconscious mind could solve them? I've thought that maybe it has to do with "symbolic processing", e.g., reasoning about things using logic and symbol manipulation. Of course, my brain can do math subconsciously, so that seems like a hole in this theory, but I feel like before my brain can do math subconsciously, I need to build a mental model of the problem. Once a mental model is in place, I feel like my subconscious can "intuit" a solution.. so maybe the conscious mind is necessary to build mental models, and to reason about things that are not "intuitive".

Another candidate for what the conscious mind does is adjusting the mental zoom level, and directing what to zoom in on.

In any case, I've tried not controlling myself, and just letting my subconscious make decisions, and it seems to work. That is, my body does do stuff, all on its own. It can even talk to people, all on some sort of "auto-pilot".

So.. I'm not sure. Note that the issue of "what does the conscious mind do?" is separate from the question of "free will". Even though I don't believe in free will, I still admit to having the subjective experience of "making decisions". I'm just curious why some of the decisions made by my brain are interpreted subjectively as having been made consciously, and some of them are not interpreted that way.

free will

This is an excerpt from a previous post: ...I don't believe in free will. More specifically, I don't understand what free will is. I feel like brains have a set of desires, and they try to meet those desires, and they might make compromises between conflicting desires, and they might accidentally do something that doesn't meet their desires, but brains are not capable of deciding to do things that don't meet their desires. I feel like someone reading this might slap themselves in the face and say "Ha! I did something that didn't meet any of my desires." However, I would say to that person, "you wanted to prove me wrong -- that's the desire you were satisfying when you slapped yourself in the face, and you wanted to prove me wrong so badly that you compromised your desire to not hurt yourself."

Anyway, I don't know what it would even look like for an entity to make a "free will" decision that wasn't based on any desires. Why would they have made that decision and not some other decision? If they can't answer that question, I think the decision was random, not free. If they can answer the question, then the real question becomes: did they make a free will choice to have that value system for comparing decisions, rather than some other value system? If so, we ask "why did they choose the value system they chose?", and we apply the argument recursively. If not, then the freedom seems to end there.

I think humans are born with a set of desires that they don't choose, like wanting food and air and love. I think other desires are logically derived from those, e.g., I want to go to school, so I can get a job, so I can earn money, so I can buy food.

Incidentally, I do believe in jail. If someone does something bad, I don't believe in "judging" them in the sense of thinking "shame on you! you evil person!", but if I think they'll do it again, and I'm afraid of it happening to me, then I would like them to be prevented from doing it.

opposite of fear

First, I think fear is an emotion felt about the outcome of future events.

For instance, I might not know whether a girl will say "yes" to going on a date with me, and I might fear that she'll say "no". Once I ask her, and she gives an answer, my fear goes away. Even if she says "no", I'm no longer afraid that she'll say no. It just sucks.

I think the opposite of fear is hope.

I might hope that the girl will say "yes". And once I ask her, and she gives an answer, my hope goes away. Even if she says "yes", I no longer hope that she'll say yes. It just rocks.

I said this to a friend, and he said that the opposite of both fear and hope is indifference. I think this is true. It reminded me of a post I wrote about opposites, where I suggest that opposites often come in threes rather than twos. For instance, black and white are opposites, but the notion of clear or transparent is opposite to both of them.

This notion of indifference led to my thoughts about mental zoom.

behind the mind's eye

I talk about the mind's eye zooming in and out here. I talk about creating larger and larger infinities here. I feel like the two are related.

The mind's eye sees a "set" of stuff in front of it. Zooming out involves seeing a larger set of stuff, including the spot where our mind's eye used to be.

Of course, since our mind's eye is now someplace new, it also seems like this creates the potential for an even more zoomed back position which would see in front of it our current mind's eye.

Hence, it seems like there's always something behind us that we can't see.

infinity

Here is a game I play sometimes. I try to imagine ways of organizing ones. The game usually goes like this.
  • 1
  • 11
  • 111
  • we see a pattern here, and we can invent a compressed way to write it, like "rep(1)", which means "repeat the number 1 an infinite number of times"
  • rep(1)
  • rep(rep(1))
  • rep(rep(rep(1)))
  • we see another pattern here. we're getting deeper and deeper nested calls of "rep", but the "rep" notation itself is not expressive enough to represent this, so we need a new way to compress this information. Let's represent this as "rec(rep, 1)", i.e., an infinite number of recursive calls to "rep" with a base case of "1".
  • rec(rep, 1)
  • rec(rep, rec(rep, 1))
  • rec(rep, rec(rep, rec(rep, 1)))
  • we see another pattern here of recursively calling repeat on recursively calling repeat again and again.. unfortunately this pattern is not expressible in terms of just "rec" and "rep" as we've defined them. We need something new, again. Let's express the infinite application of "rec" and "rep" to 1 as "rec2(rec, rep, 1)"
  • rec2(rec, rep, 1)
  • rec2(rec, rep, rec2(rec, rep, 1))
  • rec2(rec, rep, rec2(rec, rep, rec2(rec, rep, 1)))
  • we see another pattern here, and we could imagine that the next thing to do is...
  • rec3(rec2, rec, rep, 1)
  • ...and then...
  • rec4(rec3, rec2, rec, rep, 1)
  • which is a pattern in itself, of repeatedly inventing "rec" functions with larger and larger numbers of arguments.. we might collapse this whole business into "superRec(rep, 1)"
  • superRec(rep, 1)
  • superRec(rep, superRec(rep, 1))
  • superRec(rep, superRec(rep, superRec(rep, 1)))
  • rec2(superRec, rep, 1)
  • rec3(rec2, superRec, rep, 1)
  • rec4(rec3, rec2, superRec, rep, 1)
  • now we see a similar pattern to "rec4(rec3, rec2, rec, rep, 1)", except there's a "superRec" where we had "rec".. we could represent the infinite extension of this new pattern as "superRec2(superRec, rep, 1)"
  • superRec2(superRec, rep, 1)
  • superRec2(superRec, rep, superRec2(superRec, rep, 1))
  • superRec2(superRec, rep, superRec2(superRec, rep, superRec2(superRec, rep, 1)))
  • rec3(superRec2, superRec, rep, 1)
  • rec4(rec3, superRec2, superRec, rep, 1)
  • now we see a similar pattern to "rec4(rec3, rec2, superRec, rep, 1)", except there's a "superRec2" where we had "rec2".. we can guess that eventually all of our "recN"'s will be replaced with "superRecN"'s, and we can represent that whole mess as "uberRec(rep, 1)"
  • uberRec(rep, 1)
  • uberRec(rep, uberRec(rep, 1))
  • uberRec(rep, uberRec(rep, uberRec(rep, 1)))
  • rec2(uberRec, rep, 1)
  • rec3(rec2, uberRec, rep, 1)
  • rec4(rec3, rec2, uberRec, rep, 1)
  • so we went from "rec4(rec3, rec2, rec, rep, 1)" to "rec4(rec3, rec2, superRec, rep, 1)" to "rec4(rec3, rec2, uberRec, rep, 1)".. we can guess that we're going to need something like "ultraRec" and then "megaRec" and eventually we'll run out of words, so instead of "super" and "uber" and "ultra" and "mega" we'll call them "super1Rec", "super2Rec", "super3Rec", and the infinite extension of them we'll call "superInfiniteRec"
  • superInfiniteRec(rep, 1)
  • superInfiniteRec(rep, superInfiniteRec(rep, 1))
  • superInfiniteRec(rep, superInfiniteRec(rep, superInfiniteRec(rep, 1)))
  • so this seems like it's going to lead us to something like "superInfiniteRec2" and "superInfiniteRec3", which we'll eventually call "uberInfiniteRec" and then "ultraInfiniteRec", and then we'll replace the "uber"'s and "ultra"'s with "super2"'s and "super3"'s and eventually "superInfinite" giving us "superInfiniteInfiniteRec"
  • superInfiniteInfiniteRec(rep, 1)
  • superInfiniteInfiniteInfiniteRec(rep, 1)
  • superInfiniteInfiniteInfiniteInfiniteRec(rep, 1)
  • now there's an obvious pattern here.. repeated use of the word "Infinite" in our function name. However, it is less clear how to represent the infinite extension of it.. we might try something like "super[rep("Infinite")]Rec(rep, 1)"
  • super[rep("Infinite")]Rec(rep, 1)
  • super[rep(rep("Infinite"))]Rec(rep, 1) whatever that means
  • super[rec(rep, "Infinite")]Rec(rep, 1)
  • super[superRec(rep, "Infinite")]Rec(rep, 1)
  • super[super[rep("Infinite")]Rec(rep, "Infinite")]Rec(rep, 1)
  • super[super[super[rep("Infinite")]Rec(rep, "Infinite")]Rec(rep, "Infinite")]Rec(rep, 1)
  • um.. now I'm pretty sure this notation still has meaning, and I can see a new pattern, but it seems like expressing this pattern will be even messier. Maybe it would be easier if I had made some different notation choices above..
The interesting thing is that I keep needing to invent new notations to represent new organizational concepts. In fact, I doubt that there exists any "clean" notation that keeps working in some sensible, self-consistent manner forever. Weirder still, I think the game can be played, in theory, past the ability to create notations. That is to say, the number of iterations of this game may be uncountable (I'm pretty sure it is, I just haven't proven it myself). I think this is related to the Church-Kleene ordinal -- the first ordinal that no system of notations can reach.
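One way to see why the notations keep running out: if I transcribe the game into code, every new pattern forces a brand-new constructor that the old ones can't express. A toy sketch (using the names from the game above):

    from dataclasses import dataclass

    @dataclass
    class One:
        def __str__(self): return "1"

    @dataclass
    class Rep:  # rep(x): "repeat x an infinite number of times"
        body: object
        def __str__(self): return f"rep({self.body})"

    @dataclass
    class Rec:  # rec(f, x): infinitely nested calls of f, with base case x
        fn: str
        base: object
        def __str__(self): return f"rec({self.fn}, {self.base})"

    print(Rep(One()))                     # rep(1)
    print(Rec("rep", One()))              # rec(rep, 1)
    print(Rec("rep", Rec("rep", One())))  # rec(rep, rec(rep, 1))
    # ...and "rec2", "superRec", etc. would each need yet another class;
    # no finite family of constructors survives the whole game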

Now, in the game above, I start with "1", "11", "111", which one could think of as "1", "2", "3". In this case, one could think of "rep(1)" as the number that comes after all the integers, and "rep(1) 1" as the number after that, on and on.. hence, you can use this game as a way of generating numbers to count things with. In mathematics, these numbers are called ordinals, and Cantor normal form is one way of expressing them (my "rep(1)" would be "ω", my "rep(1) 1" would be "ω + 1", my "rep(1) rep(1)" would be "ω·2", and my "rec(rep, 1)" would be "ω^ω", or something like that).
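To spell out why I'd guess ω^ω for rec(rep, 1) (this is my own back-of-the-envelope ordinal arithmetic, not something I've verified):
  • rep(x) repeats x infinitely many times, so rep(x) = x·ω
  • hence rep(1) = ω, rep(rep(1)) = ω·ω = ω^2, rep(rep(rep(1))) = ω^3
  • rec(rep, 1) nests rep forever, so it's the limit of ω, ω^2, ω^3, ... which is ω^ω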

Now, there's a notation for talking about levels of infinity, or "cardinals". The countable infinity is called ℵ_0, in aleph notation, and ℵ_1 is the next level up -- the size of the real numbers, if the continuum hypothesis holds. You can keep counting ℵ_2, ℵ_3, ℵ_4, and eventually ℵ_ω, ℵ_(ω+1), ..., ℵ_(ω·2), and so on. Notice that the ordinals act as subscripts for the cardinals.

Now how many cardinals are there? Presumably uncountably many, since there are uncountably many ordinals to use as subscripts. But we don't mean uncountable in the ℵ_1 sense -- rather we mean not-countable, but probably larger than ℵ_1, or in fact larger than any cardinal.

So in summary, the game of making things bigger can be played "infinitely", but not in the way I would have thought. I would have thought it could be played "infinitely" in the sense that I could keep coming up with something bigger, but in fact, it can be played to the point where I can't come up with something bigger, and yet there still is something bigger -- I just can't come up with it, due to the limitations of notation while living in an ℵ_0 or possibly ℵ_1 sized world.

8/10/12

mental zoom

My mind's eye feels like a microscope. I can zoom in to different levels. For example, I might be looking at a ladybug on a leaf. Here are some different levels:
  • seeing the intricate details of the ladybug, focused on the small part of my retina that contains the image
  • seeing the plant that the ladybug is on, and hearing birds overhead
  • aware of my body standing on a path in the woods, looking at a ladybug
  • aware of my goals; I know where I am in the woods, and where I hope to end up
  • aware of higher level goals; I know why I chose to come to the woods
  • aware of mental systems, like the fact that my brain has goals, and that it has a variety of systems in place for achieving goals
  • aware of meta-mental systems, like the fact that I seem to have a "mind's eye" that is seeing mental systems, including the mental construct of a "mind's eye"
I "feel" the stuff at the level I'm zoomed in to. If I'm zoomed into the intricate details of the ladybug, then those details seem "real". I'm subjectively experiencing them. As I zoom out, I'm less "in the moment" of experiencing the ladybug, and more in the moment of experiencing some higher level process.

At one level, I might be aware of a pain in my leg, like a bug bite. As I zoom in (to something else) or out, this pain fades.

One tool for zooming out, for me, is "indifference". In order to see something from a higher level, I need to not care about the stuff at the lower level. Caring about it seems to keep my mental focus at that level. At a higher level, my mind can still deal with issues at a lower level, but without getting "wrapped up" in them. For instance, if I zoom out from the level of really feeling the pain of a bug bite, I can still put some ointment on the bug bite, without feeling the pain as much. Similarly, if I am feeling scared or angry, I can zoom out and process the situation more objectively without being "controlled" by the fear or anger. Of course, if I zoom out too much, I become indifferent to how I handle the situation at all, because my mind is focused on much higher level issues, and I will appear outwardly to just be staring into space.

I think I may be my mind's eye. I feel like I do things, but I'm not sure that I really do them. I think I just feel like I do. I'm not sure what the connection is between subjective awareness (the stuff that seems to come from where the mind's eye is looking) and action (the stuff we might call "free will"). The more I think about this mental model, the more it feels like I am the mind's eye, but it does still feel like I have one control knob -- the zoom level, and what to zoom into -- though I'm not even sure that the mind's eye itself is in control of that.

I think meditation can help people zoom back. Meditation seems related to indifference and awareness, which both seem like useful tools for zooming back, in addition to the tool of calming the mind (which is itself a form of indifference).

I think the zoom level right behind our physical body is our imagination, and this can be a tricky level to zoom behind, because our imagination is capable of making it appear as if we are zooming behind it. For instance, in reading this, I might imagine zooming back from my act of reading this, past my imagination, and seeing all kinds of stuff happening in my mind, and everything getting smaller and farther and farther away -- but it's possible that that is all taking place at a static level of zoom at the level of my imagination. That is to say, the things that seem like other stuff going on in my mind may not in fact be going on, but are just fancy images that my imagination conjures up for what it might look like to see stuff going on in my mind. My mind is very tricky this way. I have felt at times that I am actually zoomed back, and actually seeing real stuff that is going on, and this feels subjectively different from merely imagining that I'm zooming back, but the difference is subtle.

I think some drugs can zoom people back (or forward) more than the mind usually does in the course of a person's normal life. Of course, drugs may also "tilt the mind's mental camera" to look at things a little sideways, seeing things differently, e.g., seeing colors as sounds, or emotions as images. The benefit of drugs may be becoming aware of mental zoom levels that were never achieved before. The danger is that zooming way back, having never zoomed that far back before, and having the camera tilted as well, can be confusing and scary. There is also the risk of taking physical actions based on this possibly skewed and misaligned view of reality, so I would want someone to look after me.

When we are focused at a particular zoom level, it is possible to forget that zooming is possible. Maybe my girlfriend breaks up with me, and I'm very sad and so wrapped up in the sadness that I forget it is possible to zoom back a bit and see the sadness more objectively, and in context with other things going on in my mind.

When I am zoomed back far enough, I can see my mind's processor for predicting future events. This can feel like seeing the future, though I'm sure it is just my mind's best guess about the future.

Each time I manage to zoom farther back, it takes a while to understand what is really going on, sort of like being a baby and trying to learn how to use my limbs.


7/22/12

learning on your own

At my family reunion, I had a discussion with my uncle about learning stuff by oneself. He's a proponent of reading what other people have done. I'm a proponent of reinventing the wheel as much as possible.

I think reading what other people have done is a powerful tool, but it has some drawbacks. First, it's sort of like looking at the answer to a brain teaser -- I now have the answer, but I probably haven't improved my skill at solving brain teasers. Second, it kind of puts my mind in a box, so that I'll think about the problem the same way as the people before me, and I may get stuck in the same places they got stuck, whereas if I reinvent the wheel myself, I may arrive at a slightly different wheel -- one with advantages I wouldn't have thought of if someone had just told me about wheels.

Anyway, as always, I think there are pros and cons, but I think that people undervalue reinventing the wheel. In fact, I think there is a social stigma against it. I think people perceive it as arrogant -- how can someone think that they'll reinvent a wheel better than the "great" inventors of the past? However, I think this perception is harmful -- how are people supposed to learn to be inventors if they can't practice inventing things that have already been invented?

big words

Big words compress information. However, they are also more difficult to understand -- they are used more rarely, so they are more likely to be forgotten, or may take longer to recall. I feel like the tradeoff rarely favors using a big word if the goal is conveying information.

I feel like this same tradeoff comes up in math. After thinking about a problem for a while, the solution can be compressed into a short equation. However, presenting that equation alone gives the reader the very challenging task of unraveling its meaning (especially if it has lots of Greek symbols, and the reader is not Greek). I think it is often better to walk the reader through the key ideas that led to the equation.

Naturally there is a tradeoff. Sometimes big words and equations are good.

But I feel like they are often abused in order to make an author look smart, or even to hide a bad explanation by making it difficult for people to understand, and hoping that people won't ask questions for fear of feeling stupid for not understanding it.

I think explanations are like user interfaces. If the explanation fails, it is probably the fault of the person doing the explaining, NOT the person listening.