[New? Start here: https://meditationstuff.wordpress.com/articles/]
This is a rambling mess and comes from a conversation with a peer. I didn’t want it to get lost. A bunch of key explications are from him; mistakes are mine.
Slightly less related but still pretty related:
“Virtue of the void…”: basically the idea that the master or guru actually did the thing by trying to do the thing. And now he’s trying to talk about it, to sell it (not necessarily in a bad sense), and to help people get there more easily than he did. So he gives some sort of instructions. “But people might focus too much on the instructions and less on actually doing the thing.” This is similar to Yudkowsky’s virtues-of-rationality thing, where he references Musashi (don’t do the thing; do the thing you’re actually trying to do by doing the thing), and CFAR’s “key insight behind all the techniques” thing, so you can modify techniques and make your own, as needed. You want to be building up a concept of what you’re doing and why you’re doing it. And if that concept isn’t evolving or changing over time, then something’s probably wrong, because it’s highly unlikely you’ll have the right concepts right off the bat.
Sort of Quinian bootstrapping: signifiers first, with only hypothetical signifieds and incorrect referents at the start. And eventually you’ll hopefully hit upon the right referents and the correct signifieds. These might be refined or completely wrong several times over. “Oh, that’s what e means. Oh wait, no, *that’s* what e means…” (In fact, elegantly designed instructions can actually support this, supporting multiple levels of further and further refined understanding, even levels with discontinuous jumps between them. Docs that can be read at multiple levels of meaning.)
This also sort of has to do with the degeneration of scientific or methodological cultures, in a way. Logical positivists, behaviorists… Somehow old knowledge was rejected, even though much of it was useful. What’s going on here? One good idea and then overreaction, or rather, overapplication to everything, or overly rigid application, or application at the expense of everything else? One good idea and then politics and social gaming for status and resources? For the logical positivists, the behaviorists, the postmodernists, it’s like there are some good ideas in there, but there’s also a performative contradiction.
More: Explication of a good idea, and strategically explicating what was important at the time, but important implicit stuff was lost as people died and new people came in? Because it wasn’t emphasized? Because it was taken for granted? So it degenerated over time or was lost because of discontinuities—deaths where the master didn’t have a student and so forth.
So, one thing to do is to be aware of all these issues and to try to be really, really clear about how to do the thing. I would think that, inevitably, something is still going to be lost in translation, because you left something out that wasn’t salient to you but that would be critically important to other people, because they didn’t take the path that you did, have different experiences, and don’t automatically, effortlessly, already do this tacit thing X that’s absolutely necessary. They need to stumble on it, reason it out, or be told. (And those non-automatic, need-to-be-told things could change from culture to culture and time to time.)
So, another thing, besides just trying to be really clear and complete (and also besides bringing up all the issues above right in the instructions, which could be really helpful!), is to explicitly bring out dualities. So, come down really hard on one side and then, consciously, deliberately back up and say the opposite is important too.
Really, part of the issue is that instructions need to be compressed to some extent, or that instructions are lossy compression, especially with, like, algorithms to follow. For example, you should do X often, but sometimes you shouldn’t, and sometimes you should do Y, but you do Y only after Z, except when Q happens, and in those cases you still do X. And there’s all this other stuff you should probably be doing between the capital-letter conditions, and those may or may not have their own set of intertwining lowercase letters, or whatever.
And maybe you could actually chop things up completely differently so that it’s not so complicated (not so many letters) but so it still gets at all the important stuff (or you then inevitably leave yet something else out…).
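The “capital letter conditions” above can be made concrete with a minimal sketch. X, Y, Z, and Q are just the placeholders from the text, not real techniques; the point is that even a one-sentence instruction set compiles to branchy control flow, and every branch is a place where real practice (“the stuff between the capital letters”) has been compressed away.

```python
def next_action(did_z: bool, q_happened: bool) -> str:
    """Pick the next instructed action given the current situation.

    Encodes: do X often, but do Y only after Z,
    except when Q happens, in which case do X anyway.
    """
    if q_happened:
        return "X"  # Q overrides everything: fall back to X
    if did_z:
        return "Y"  # Y is only allowed after Z
    return "X"      # default: do X often

print(next_action(did_z=True, q_happened=False))  # → Y
print(next_action(did_z=True, q_happened=True))   # → X
```

Chopping things up differently, as the next paragraph suggests, would mean choosing different conditions entirely so that fewer branches carry the same behavior, at the risk of losing some other edge case.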
Another important thing is sort of having instructions about following instructions: suggestions about when to deviate and when not to deviate, when you especially should deviate and when you especially shouldn’t, and yet again a layer above that of when you should deviate from *that.* (Sort of gets into metadiscourse-type stuff.) (The Focusing six steps online do the first layer, and I’ve got multiple things on my blog about self-determination and ripping into instructions and remaking them into your own thing… so sort of getting behind them, getting at the actual referents or the undifferentiated phenomenological ground, what’s actually going on.)
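The layering can be sketched too, in a minimal, hypothetical way: each meta-layer can overrule the layer below it, and the stack is open-ended. The layer names and verdicts here are invented for illustration only.

```python
def follow(instruction: str, meta_rules: list) -> str:
    """Apply meta-rules top-down; the highest layer that fires wins."""
    for rule in reversed(meta_rules):  # highest layer gets first say
        verdict = rule(instruction)
        if verdict is not None:
            return verdict
    return instruction                 # no layer fired: follow as written

# Layer 1: deviate when the instruction clearly doesn't fit the situation.
layer1 = lambda ins: "deviate" if ins == "doesn't fit" else None
# Layer 2: but deviate from *that* — don't deviate while still learning basics.
layer2 = lambda ins: "follow anyway" if ins == "doesn't fit" else None

print(follow("doesn't fit", [layer1]))          # → deviate
print(follow("doesn't fit", [layer1, layer2]))  # → follow anyway
print(follow("fits", [layer1, layer2]))         # → fits
```

The design choice worth noticing is that nothing stops you from pushing a third layer; the regress the text describes is structural, not a bug.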
And that gets into rule-governed vs. environmentally contingent behavior. And also my thing (I have no idea where it went) about how objects (concepts, implicit mental models), in some sense, actually allow for non-environmentally-contingent behavior. This is true registration, in the sense of Brian Cantwell Smith’s On the Origin of Objects.