Why I am not a rationalist; Or, Integral Post-Metaphysics; And Naturalism; And the Myth of the Given; And Phenomenology; And Worldspaces; And Consent

[New? Start here: https://meditationstuff.wordpress.com/articles/]

OMG inferential distance.

The quote below is taken from David Chapman. I’m not implying he endorses Integral Post-Metaphysics. I just like the quote:

For Bayesian methods to even apply, you have to have already defined the space of possible evidence-events and possible hypotheses and (in a decision theoretic framework) possible actions. The universe doesn’t come pre-parsed with those. Choosing the vocabulary in which to formulate evidence, hypotheses, and actions is most of the work of understanding something. Bayesianism gives you no help with that. Thus, I expect it predisposes you to take someone else’s wrong vocabulary as given.



The quote above still stands, even taking into account the ideas in Einstein’s Arrogance, which is one of my favorite posts on Less Wrong:


(Yes, machine learning, yes, self-organizing maps, yes, automated feature extraction. Yes, AIXI and Gödel machines. Yes, Building Phenomenological Bridges. Still.)

(Something something computability theory and consciousness and naturalism.)

A favorite Less Wrong comment:

[…Y]ou have to enter into the formalism while retaining awareness of the ontological context it supposedly represents: you have to reach the heart of the conceptual labyrinth where the reifier of abstractions is located, and then lead them out, so they can see directly again the roots in reality of their favorite constructs, and thereby also see the aspects of reality that aren’t represented in the formalism, but which are just as real as those which are.


The remainder of the quotes below are from one of the appendices in Ken Wilber’s Integral Spirituality (IS). IS was supposedly written in about two weeks, so the appendix could have been written in hours. It’s rushed, it’s filled with jargon, and some of the “equations” and figures are missing. The inferential distance between it and most readers of this blog will be large. I hope you’ll wade through it, anyway. If you understand it, you understand a big chunk of my personal inner operating system. I have additional comments after the quotes.

Before entering the Wilber quotes, I would summarize my position thusly: “If you think, say, ‘Santa Claus’ is meaningless, then you still have work to do on your signifiers, signifieds, and referents!”



(Some of the quotes below are a little bit out of order to increase pick-and-choose coherence.)

(There is a lot of outdated jargon going on down there, and the pdf assumes you read the entire book, so a lot of what you need isn’t there in the actual document. It’s still worth a shot.)


[…] If we claim that our epistemologies are basically representational maps (or mirrors of nature), then just as we of today will invalidate what was taken as knowledge 1,000 years ago, so tomorrow will invalidate our knowledge of today. So nobody ever has any truth, just various degrees of falsehood.

[…] Let’s take four referents, indicated by the signifiers dog, Santa Claus, the square root of a negative one, and Emptiness.

Where do the referents of those signifiers exist? Or, if they exist, where can they be found? Does Santa Claus exist; if so, where? Does the square root of a negative one exist; if so, where can it be found? And so on….


The point is that by doing a type of “mega-phenomenology” of all the phenomena known to be arising in the major levels and worldspaces (of which our short list above is a very crude example), we create a type of super dictionary (or GigaGlossary) of the location of the referents of most of the major signifiers capable of being uttered by humans (up to this time in evolution) and capable of being understood by humans who possess the adequate corresponding consciousness to bring forth the corresponding signified.

Thus, using our simple list as an example GigaGloss, we can answer some otherwise outlandishly impossible questions very easily. Here are a few examples:

The square root of a negative one is a signifier whose referent exists in the orange worldspace and can be accurately cognized or seen by trained mathematicians who call to mind the correct signifieds via various mathematical injunctions at that altitude and in 3rd-person perspective.

A global eco-system is a signifier whose referent is a very complex multidimensional holarchy existing in a turquoise worldspace; this actual referent can be directly cognized and seen by subjects at a turquoise altitude, in 3rd-person perspective, who study ecological sciences.

Santa Claus is a signifier whose referent exists in a magenta worldspace and can be seen or cognized by subjects at magenta altitude (provided, of course, that their LL-quadrant loads their intersubjective background with the necessary surface structures; this is true for all of these examples, so I will only occasionally mention it).

As for “pure physical objects” (or “sensorimotor objects”), they don’t exist. The “physical world” is not a perception but an interpretation (or, we might say, the physical world is not a perception but a conceptual perception or “conperception,” which of course also involves perspectives). There is no pregiven world, but simply a series of worlds that come into being (or co-emerge, or are tetra-enacted) with different orders of consciousness. Thus:

A dog as a vital animal spirit exists in a magenta worldspace. A dog as a biological organism exists in an amber worldspace. A dog as a biological organism that is the product of evolution exists in an orange worldspace. A dog as a molecular biological system that is an expression of DNA/RNA sequencing operating through evolving planetary eco-systems exists in a turquoise worldspace.

There simply is no such thing as “the dog” that is the one, true, pregiven dog to which our conceptions give varying representations, but rather different dogs that come into being or are enacted with our evolving concepts and consciousness.


We saw that if we cannot specify the Kosmic address of the perceiver and perceived, we have assertions without evidence, or metaphysics. And we can now see that this also means that we must be able to specify the injunctions necessary for the subject to be able to enact the Kosmic address of the object. The meaning of any assertion is therefore, among other things, the injunctions or means or exemplars for enacting the worldspace in which the referent exists or is said to exist (and where its existence can, in fact, be confirmed or refuted by a community of the adequate).


In particular, the idea that there are levels of being and knowing beyond the physical (i.e., literally meta-physical) is badly in need of reconstruction. This is not to say that there are no trans-physical realities whatsoever; only that most of the items taken to be trans- or meta-physical by the ancients (e.g., feelings, thoughts, ideas) actually have, at the very least, physical correlates. When modernity discovered this fact, it rejected the great wisdom traditions almost in their entirety. Of course, modernity has its own hidden metaphysics (as does postmodernity), but when the great, amber, mythic-metaphysical systems came down, spirituality received a hit from which it has never recovered. What is required is to reconstruct the enduring truths of the great wisdom traditions but without their metaphysics.


Given what an AQAL post-metaphysics discloses, it becomes apparent how well-meaning but still meaningless virtually everything being written about spirituality is. Spiritual treatises are mostly an endless series of ontic assertions about spiritual realities—and assertions with no injunctions, no enactions, no altitude, no perspectives, no Kosmic address of either the perceiver or the perceived. They are, in every sense, meaningless metaphysics, not only plagued with extensively elaborate myths of the given, but riddled with staggering numbers of ontic and assertic claims devoid of justification.



I’m poking fun at straw Less Wrongers and I’m poking fun at straw Ken Wilber, but I’m also deadly serious. I really, really care about this stuff, and I want other people to care about it, too.

I’m not saying the average Less Wronger doesn’t get this.

Of course, say, not-average-Less-Wronger Yudkowsky does get this and then some.

I’m not saying Yudkowsky would agree with stuff here.

I’m not saying I would agree with everything, here (universal love, Kosmic habits…).

(And, dated Ken Wilber is dated, a la General Semantics. He’d write a different book, now.)





Arguably I’m just poking at straw valley of bad rationality, straw vulcanism.



“Rationality,” narrowly defined, is about having reasons. Post-metaphysics is about having referents and instructions on how to access those referents. If someone wants to use logic on me, if someone is having a conversation with me, I pay attention to both the signifieds enacted in my head and my guess as to the signifieds enacted in the other person’s head.

Anyway, I want to go more into this eventually, but one of the reasons I care about all this so much is understanding and consent.

In actual-human-value-land, not pragmatic-operationalized-land, under what conditions can we be said to understand each other? Under what conditions can meaningful consent occur? Under what conditions is ethical coordination happening? Under what conditions will I feel like we’re actually hanging out in the same worldspace together, whether we’re both in the same world? Really seeing each other?

Really interacting with each other as adults doing adult things? This, I long for.




OMG inferential distance.



5 thoughts on “Why I am not a rationalist; Or, Integral Post-Metaphysics; And Naturalism; And the Myth of the Given; And Phenomenology; And Worldspaces; And Consent”

  1. A few thoughts, not totally coherent/consistent, but if I only posted fully coherent thoughts I would never post anything:

    AIXI doesn’t form ontologies *at all*, but the brain probably does look something like a hierarchy of automated feature extractors that eventually ends up partially encompassing itself.

    I would love to see Yudkowsky comment on this. My model of him says that he would say that there is an objective really-real reality – Tegmark IV multiverse, whatever – within which the algorithms that make us up are embedded; that, from the inside, it’s difficult (maybe impossible) to ever be certain that your signifier of reality (“map”) is pointing roughly at signified reality (“territory”); and that he therefore defers to logical positivism, basically asserting that beliefs (“maps”) are only worth having insofar as they correspond to predictions.


    Since I’ve often read him punt to concepts like “magical reality fluid” as a stand-in to get around having to solve certain map/territory problems, I think he’s at least aware that those issues exist. He’s very concerned with the idea of a potential FAI being able to navigate through an “ontological crisis,” i.e., the state of finding that what it previously thought was really-real reality is actually just another illusion. Maybe making the FAI with a starting ontology based around “referents and instructions on how to access those referents” could be an approach to this.

    If my model of Yudkowsky is totally wrong, it would be cool to know that, too.

  2. >> A few thoughts, not totally coherent/consistent, but if I only posted fully coherent thoughts I would never post anything:

    Yeah, same here. Thank you for your comments.


    >> AIXI doesn’t form ontologies *at all*

    Hmm. For a practical, realizable approximation of AIXI, how you set up your input bitstream matters. Something has to be chopping up “objective really-real reality” into 1’s and 0’s. And I would argue that this is where *at least* one ontology is hiding. To the computable AIXI approximation, *something* has already done the hard work of chopping up the world into discrete pieces, even if it’s just a webcam. For a “real” AIXI, choice-of-bit-representation of the world wouldn’t matter, as long as it didn’t drop an anvil on its head. For a computable, practical approximation of AIXI, how you chop up the world, and whether you run PCA first, is potentially going to make the difference between “20 minutes” and “several times the lifetime of the universe.”
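    A toy sketch of that representation point (mine, not from the thread): the same structured bitstream, re-encoded through a fixed pseudorandom keystream, carries identical information but hides its structure from any generic, resource-bounded compressor. Here zlib’s compressed size stands in, very crudely, for description length:

```python
import random
import zlib

random.seed(0)

# A highly structured "world": alternating 0s and 1s.
signal = bytes(i % 2 for i in range(4096))

# Encoding A: the transparent bitstream.
enc_a = signal

# Encoding B: the same information XORed with a fixed pseudorandom
# keystream. Nothing is lost (the keystream is fixed and invertible),
# but the structure becomes invisible to a generic compressor.
keystream = bytes(random.randrange(256) for _ in range(4096))
enc_b = bytes(s ^ k for s, k in zip(signal, keystream))

# zlib's compressed size as a crude proxy for description length.
len_a = len(zlib.compress(enc_a))
len_b = len(zlib.compress(enc_b))
assert len_a < len_b  # the encoding choice dominates apparent complexity
```

    Both encodings are, in principle, equally fine inputs for an uncomputable AIXI; for any practical approximation, the choice between them is exactly the “20 minutes” vs. “lifetime of the universe” difference.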

    Another “ontology” is whatever language you’re using to realize computable AIXI, and whatever representation is being used to encode and enumerate AIXI’s probability distributions: the additive constant on Kolmogorov complexity depends on the choice of language.


    I guess I’m using “ontology” to mean anything which is computable/countable/enumerable/discretely-traversable-by-a-machine. Scott Aaronson’s digital abstraction layer. Now that might not be fair, because I’m not making a distinction between, uh, “natively computable” and “can approximate with arbitrary accuracy.” (And I’m getting an urge to talk about whether or not consciousness and reality are recursively enumerable, made of computronium; I am pretty skeptical, but perhaps that’s better for another time.)


    >> My model of him says that he would say that there is an objective really-real reality […]

    Thanks for the link; it adds to this post, and I would have worked it in had it been at my mental fingertips. I did read it when it was originally posted, and I thought it was great.

    My model(s)/prior(s) of what he would say on this are at maximal entropy. Totally flat. When I try to simulate Yudkowsky, I feel like I’m working on a Zen koan. Same effect on my brain. 😀

    I guess *I* would say that it’s useful to pretend referents like “tables” and “chairs” are “out there.” But when we start talking about the “objective really-real reality” we’ve only got signifieds (mental models) and more signifiers (equations). We cannot grasp that referent, and its signifieds and signifiers are as numerous as there are people on the planet, some less wrong than others, and they will continue to evolve past the end of science, as long as babies are born and grow up.


    Maybe my point is that everyone is living in their own worldspace. And probably everyone has their own contextual thresholds, visceral switch-flipping (set by nature and nurture) for the felt sense of “System 1, if not System 2, informs me that we have achieved mutual understanding.”


    All the people below have a different “signified” for the signifier/referent “atom,” and that difference will have phenomenological and behavioral consequences:

    A) An ancient greek philosopher (“An atom is…”)
    B) A fifth-grader (“An atom is…”)
    C) A physics major undergrad (“An atom is…”)
    D) A working experimental physicist, named Joe (“An atom is…”)
    E) A 1000-year-old vampire emeritus physicist (“I don’t think in terms of the standard model. Here’s how I think physics is going to be unified and this is what I think is really real behind all the equations…”)
    F) A “physicist” 10,000 years from now, named Fred (“there are no such things as atoms, there are only metatensorthingies in the zero-point-dark-energy wibbly compuperceptronium field. Particles and waves and fields and matter and energy and time and strings are F=ma; Metatensorthingies are what’s actually there.”)

    Joe: “Ok, Fred, but whether “atoms” or “metatensorthingies,” the same *territory* corresponds to both of those signifiers/signifieds. Before we discovered metatensorthingies or refactored physics to create the conceptual category of metatensorthingies (holes/quasiparticles), they were still *there* before the big bang, once the universe got cool enough, whatever.”

    Fred: “Yes!”

    Me: “Yes, but no. Yes, reality still bit back in the same way, grue and bleen aside, but, but…”

    I think fewer people would agree that (A) and (B) are referring to the same referent(s) than, say, (B-F), but I would say *none of them* are referring to the same referent.


    Referents (signifiers/signifieds) matter because we use referents to design even machines that don’t use explicit referents. Referents are “merely” what our algorithms feel like from the inside… but we’ll only ever experience anything from the inside, including our relationship with Moloch or a human-built Elua. And including our relationship with this blog comment or the concepts in this blog comment.

    I wonder if my answer would change if nanomachines were hanging out in my synapses or if an arsenic, quantum-dot, or being-of-pure-energy-and-light alien physically melded its consciousness with mine, and I was thinking partially on a substrate of quantum foam that directly participated in the infinite void or something.


    A born-again Christian is having a conversion experience: They are experiencing Jesus’ love as a concrete, immanent, literal, experiential reality, right now.

    My inner simulator can play a (compelling but provisional, low-fidelity) virtual-reality-brain-bridge of their phenomenal field, everything they’re experiencing within and without, as if I were them. I can imagine what that’s like, what the absolutely concrete, lived reality of that is like.

    And I can also, as myself now, imagine social pressure, cultural expectations, demand characteristics, adrenaline, desperation, vagal tone, oxytocin, dopamine, ancestral environments, memetics, etc.

    They really experienced it. It really happened. They had an experience and they interpreted it, all at once. I do not deny they *had the experience,* that would be violent. But I can have an interpretation of their experience that I believe transcends and includes their interpretation of their experience.

    If I experienced an off-the-scale, identity-shattering, identity-reconfiguring, profound, generalized sense of presence, love, compassion, acceptance, understanding, etc., etc., completely beyond what I ever thought possible… Well, I’d probably check myself for signs of a stroke, ask my friends to keep an eye on me, possibly weep with gratitude, but because whoa brains, whoa perceptronium, not whoa Jesus. And I’d give it a few weeks, months, or years before making any big changes in my life.

    I guess my point is that we cannot transcend our worldspace, our reality is what it is at any particular moment, neurons, Planck scale, and all.

    Map/territory distinctions: map and territory only exist in our brains, which we have reason to believe are correlated with something really-real, but “really-real,” “correlated,” and “my brain made me do it” are still conscious experiences—you don’t experience the territory that is brain and the territory that is everything besides your brain.


    One might decide that I’m “just” talking about subjective experience, that I’m not weighing in on physics. But what I’m trying to weigh in on is the profoundly constructed, enacted nature of conscious experience and how viscerally, intuitively grasping that can have positive experiential, behavioral, and ethical consequences that may or may not completely overlap with the archetype of a 1000-year-old non-straw-vulcan rationalist vampire.


    Ok, disagreements welcome. 😀

  3. Thanks for your extremely thoughtful response.

    Re: AIXI, yes, I agree with everything you said, and I honestly hadn’t thought of choices related to system input and design as ways of sneaking in an ontological frame. What I was actually trying to say is that AIXI itself doesn’t form maps internally; it just looks at every possible map (expressible by its representational framework) and compares that distribution of maps with its observations. Compare this with human cognition, which is almost at the opposite end of the spectrum in terms of trying to compress and reuse maps, often to a fault.
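    To make “looks at every possible map” concrete, here’s a minimal sketch (my toy, not real Solomonoff induction or AIXI): the hypothesis class is all repeating binary patterns up to a fixed length, each weighted by a 2^-length prior, and prediction is a weighted vote among the hypotheses consistent with the observations so far:

```python
from itertools import product

def predict_next(obs, max_len=8):
    """Toy Solomonoff-style induction (illustration only):
    hypotheses are repeating binary patterns up to max_len bits,
    each with prior weight 2^-length; keep those consistent with
    the observations and take a weighted vote on the next bit."""
    weights = {0: 0.0, 1: 0.0}
    for n in range(1, max_len + 1):
        for pat in product((0, 1), repeat=n):
            pred = [pat[i % n] for i in range(len(obs) + 1)]
            if pred[:len(obs)] == list(obs):      # consistent with the data
                weights[pred[-1]] += 2.0 ** (-n)  # shorter pattern, higher prior
    return max(weights, key=weights.get)

print(predict_next([0, 1, 0, 1, 0]))  # -> 1
```

    Real Solomonoff induction enumerates all programs, not just repeating patterns, and is uncomputable; the sketch only shows the shape of “no internal map, just a prior over all maps.”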

    On reflection, I think I still have some thinking to do about this worldspaces business. Thanks for the intellectual jolt. I’ll probably have more to say after I digest this.

    I am reading the Kosmic Address link and wonder whether FAI/Elua would correspond to an indigo or a violet level perspective, since (as described) Elua is human-chauvinistic.

    > “Matter is not the bottom level of the spectrum of being, but the exterior of every level of the spectrum, and so with each new rung, there is new matter, and the entire world changes, again.”

    (From the pdf.) I guess my current, provisional response is that this assertion should be phrased as a question. Is matter the bottom level of being, or is phenomenology the bottom level of being? From the inside it feels like phenomenology is, but this could merely be a reflection of our lack of understanding. Plus, if you drop an anvil on your head, it seems like matter immediately asserts its precedence.

    I can see how living in the post-metaphysical integral spirituality etc. framework might help one avoid dark night experiences. To reference your post on dark night and disturbance of conceptions of social identity – if you *start out* seeing your identity as a real meaningful object with a particular Kosmic Address (please tell me there is a better term for this) then your identity should retain that cosmic address and remain accessible and coherent to you regardless of what altitude you happen to inhabit.

    > I wonder if my answer would change if nanomachines were hanging out in my synapses or if an arsenic, quantum-dot, or being-of-pure-energy-and-light alien physically melded its consciousness with mine, and I was thinking partially on a substrate of quantum foam that directly participated in the infinite void or something.

    Unless such a being had, like, explicit magical knowledge that we don’t have, I think it should probably feel just as uncertain about its status. Or in other words, imagine a sentient organism comprised of a planet-wide swamp which does computations via convection currents between adjacent bogs, wondering whether it would give a different answer if it were thinking on some kind of fantastic self-wiring electro-chemical architecture inside a mobile, flexible body.

    This is my favorite thing when it comes to illustrating the impossibility of seeing things from any other perspective than “from the inside”:


    Maybe you’ve already seen this, but when I discovered this website I was very deeply impacted. Select ‘2’ at the top and press “play,” and in the bottom-right box you’re watching a deep neural net think about its own concept of 2-ness. Seeing this demonstration really shouldn’t have had such an effect on me, since at the time I was already a computationalist and believed that computers could have concepts, but there’s a big difference between believing abstractly and seeing. “What my brain is doing when it thinks about a ‘2’ isn’t fundamentally different from what this ANN is doing.” My ‘2’ may exist in a larger framework that involves more referents, but it’s just a matter of degree.
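    The technique behind that kind of demo is roughly activation maximization: gradient ascent on the *input* until a chosen output unit fires as hard as possible. A minimal sketch with a hypothetical untrained linear model (random weights, so the recovered “image” is noise, but the mechanics are the same):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny "classifier": 10 digit classes over a 64-pixel input.
# The weights are random (untrained); this sketches the technique, not
# the linked demo's actual model.
W = rng.normal(size=(10, 64))

def logits(x):
    return W @ x

# Gradient ascent on the INPUT to maximize the "2" output unit.
x = np.zeros(64)
for _ in range(100):
    grad = W[2]                             # d(logits[2])/dx for a linear model
    x = np.clip(x + 0.1 * grad, 0.0, 1.0)   # keep "pixels" in image range

# x is now the input this model finds maximally 2-like.
```

    For a linear model the gradient is just the weight row, so the “concept of 2” you recover is literally the template the model matches against; with a deep net, the same loop runs the gradient back through every layer.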

  4. >> Thanks for the intellectual jolt.



    >> Worldspaces

    I should have been explicit about this, some possible definitions of “worldspace”:

    1. All the thoughts one is potentially capable of having at a given point in time.
    2. The set of all real and imaginary objects (and relationships between those objects) that one can cognize at a given point in time (“democracy,” “love,” “intimacy,” “calculus,” “asymptote”)
    3. The entirety of your “qualia phase space,” the collection of all experiences you’re capable of having.

    For (1-3), this is all that does or can exist to that person. Or at least they sharply suppress the frothy phenomenological flux around the edges of their platonic abstractions.

    (Complication: Two people can be using the same signifier (“dog,” “politics,” “ethics,” “god”) to point at completely different referents or signifieds. There is a qualitative difference between “oh yeah, we should define what we’re talking about” versus a pervasive background process, at the edges of consciousness, that’s always monitoring diverging signifieds in real time.)

    In no particular order, you can deliberately evolve your worldspace(!!!!!!):

    1. Meditation
    2. Take long, aimless walks and get plenty of sleep
    3. Throw yourself into situations (job, social, anything) where you have no idea what’s going on, and it’s uncomfortable
    4. Rationalist “noticing” exercises: “huh, I’m confused,” “huh, I’m surprised,” “huh, that’s funny.”
    5. Lots and lots of quality therapy and/or journaling and/or Focusing/IFS-type stuff
    6. [Not ideal, mixed with other stuff, 1-4 are better:] Embrace your quarterlife or midlife crisis or depression.


    >> whether FAI/Elua would correspond to an indigo or a violet level perspective

    The color levels are attractors for worldspaces. For example, about half the USA is around “amber” – “mythic, ethnocentric, traditional.” The rest of the USA is mostly orange (rational, worldcentric, pragmatic, modern) and green (pluralistic, multicultural, postmodern), with a tiny sprinkling of other levels. European countries, on average, seem to have a “center of gravity” more towards green; the USA might have more variance.

    (Mega abstractions, here: You’ve also got what color levels are encoded in the constitutions and legal systems of countries vs the pop culture zeitgeist. And, the color levels can have “healthy” and “unhealthy” versions, and they are sometimes used ambiguously for the “cognitive” line vs the “ego” line… There’s a lot going on, and only a tiny fraction of it has firm grounding in peer review. The rest is just deliciously, compellingly truthy, to some people.)

    The color levels are “real” (and mindblowing and useful and ethically fraught) but I wouldn’t take them too seriously. (Don’t allow them to become thought-stoppers or use them to pigeon-hole and discard people.) They are explicitly traced back to Jane Loevinger’s “Washington University Sentence Completion Test,” which was a research instrument that was in pretty wide use, as far as I’m aware. She iteratively homed in on a construct that she named “ego level.” If you can get your hands on “Measuring Ego Development, Vol. 1,” it’s a pretty reassuring book-length treatment of her methodology, Bayesian reasoning, and all.

    Due to lack of data, her highest level was speculated to be a catch-all bucket for additional higher levels that she couldn’t separate. (There are fewer and fewer people at each subsequent level.) Susanne Cook-Greuter eventually collected a lot more data and chopped Loevinger’s highest level into two separate levels, the higher of which became a new catch-all bucket.

    On the one hand, the levels do seem to be “attractors” in that they are quasi-discrete. On the other hand, their discreteness might be an artifact of the research instrument. And it’s more of a center of gravity—people will exhibit “behavioral and phenomenological tails” (my words) to levels on either side. Cook-Greuter further emphasizes (my paraphrasing) that the levels are still big, clunky, leaky abstractions over the messy, idiosyncratic reality of actual individuals. Finally, “ego level” is a very important line of development, but there are other lines of development that cross-train and interrelate with each other.

    Finally, finally, I think one or two upper levels are (informed) speculation by Wilber and don’t have any formal data behind them. But the rest of them have data behind them, mostly peer-reviewed, and maybe one or two dissertation-committee only.

    Finally, finally, finally, the descriptions of the later levels get all tangled up with “spirituality” and “values” because there isn’t enough data yet to untangle what’s going on up there, and experimenter and experimental subject biases have a lot more weight.

    The best place to get an intuitive grasp of the levels is a pdf linked here:


    Regarding FAI/Elua, as you sort of alluded to with AIXI, they are more likely to become artifacts with ontologies that humans couldn’t meaningfully apprehend or might function with internal structure that doesn’t seem “ontology”-like or “representation”-like at all. Maybe just semantics. I tend to come at “worldspace” from the subjective side.


    >> Is matter the bottom level of being, or is phenomenology the bottom level of being? From the interior it feels like phenomenology is, but this could merely be a reflection of our lack of understanding. Plus, if you drop an anvil on your head, it seems like matter immediately asserts its precedence.

    >> I can see how living in the post-metaphysical integral spirituality etc. framework might help one avoid dark night experiences.

    >> “What my brain is doing when it thinks about a ‘2’ isn’t fundamentally different from what this ANN is doing.”

    Yeah, I’ve seen that exact web page; it still makes my brain convulse a little bit. One of my favorite takes on neural networks is Paul Churchland’s “Plato’s Camera: How the Physical Brain Captures a Landscape of Abstract Universals”

    (That said, someone wrote that the fundamental unit of computation in the brain is the molecule, not the neuron. I like that.)

    The 10,000-year-old [sic] vampire Stephen Grossberg has a 2004 video tutorial online, fast-paced and very high-level, that I remember being pretty intensely stimulating when I watched it a few years ago:

    Linking Mind to Brain: A tutorial Lecture Series

    Stephen Grossberg, Paul Smolensky, Jurgen Schmidhuber (Neuro/Cog Sci, Grounded Symbolic Computation, AI/ANN) are a few of the people I keep circling back to for startup ideas and personal philosophy/what-the-hell-is-going-on.


    Only other comment is that it might be worth making sharp distinctions between phenomenology, cognitive science, and neuroscience. You can simul-track them and use them to constrain and inform each other, but, critically, you can’t collapse them into each other.

    Wilber has this model with three degrees of freedom: interior/exterior, singular/plural, inside/outside, which gives eight zones:

    *Meditation happens in Zone 1, taking an inside view of the singular interior.
    *Ego levels are disclosed by research in Zone 2, taking an outside view of the singular interior.
    *Neuroscience happens in Zone 6, taking an outside view of the singular exterior.
    *Cognitive science happens in Zone 5, taking an inside view of the singular exterior.

    The AQAL model is an uber-abstraction, ambiguous and leaky (I’m not sure if I could pin things down if someone started asking me fine-grained questions), but potentially very, very stimulating.


    Good times. Questions, vigorous disagreements, self-links, guest-posting, and random links-of-interest welcome.

  5. Pingback: Not even falsifiable: Worldviews as fashion | Rival Voices
