[New? Start here: https://meditationstuff.wordpress.com/articles/]
[This is a one-sided, not-fully-thought-through, and poorly edited response to an ongoing thread on the google group. There’s been an ongoing theme of what you can actually do with a human brain–what are the degrees of freedom and what’s the opportunity cost?
I put a lot of emphasis on working effectively with consciousness in its “natural” state. And, in that state, I am in some sense treating the unconscious as an oracle. Or, rather, as something that feels somewhat phenomenologically separate and is governed by laws not yet fully understood.
I haven’t put a lot of thought or effort into working within cultivated states of consciousness. I think I’ve mentioned that I got bored after playing with the second jhana for like twenty minutes, and likewise after a few deliberate lucid dreams. (I have spontaneous lucid dreams every once in a while and I sometimes do informal experiments in them.) This lack of interest in cultivated states of consciousness is partially a personal bias, for a variety of reasons which I won’t go into here.
More generally applicable is my heuristic for learning to work with consciousness effectively no matter what state one is in, which places a natural emphasis on day-to-day waking consciousness. Another reason is my general pessimism about the ongoing utility of altered states. I think even someone like Ingram has to spend a few days on retreat before he has a perfect “read-write” conscious workspace. And, at that point, my impression is that contact with consensus reality is tenuous at best. But I’ve never actually had that conversation with him or anyone.
Now, that’s not to say that something interesting isn’t possible at moderate or large opportunity cost. People with brain damage do sometimes get genuine “superhuman” powers, and sometimes in the mathy domain (like actually useful math synesthesia and lots more). This seems to have something to do with disinhibition. That is, one part of the brain keeps another part from firing as hard as it could, for whatever reason, and it’s like brain damage can remove the brakes. It’s plausible that one could permanently knock out some [inhibitory] circuits–I suspect that that’s exactly what classical buddhist enlightenment *is.* Again, I’m pessimistic that consciousness/meditation has the right levers to do this for arbitrary circuits–at reasonable opportunity cost, at least–but maybe I’m wrong.
Another thing is the “quality” of consciousness. Meditation, say jhanic meditation, typically removes aspects of consciousness. And then what’s left is more obvious and possibly more directly accessible and manipulable. That could be enough. Another possibility is “phase changes” in consciousness–like, *maybe* one gets access to qualia and mental moves that truly aren’t possible in normal states. Maybe. I don’t think it’s the case that truly “unconscious” processes (always otherwise unconscious) could become “conscious,” but I can think of plausible arguments where that could actually be true.
I don’t have your intuition about digits of precision within the human nervous system. You may be interested in the work of Paul Smolensky and also Douglas Hofstadter’s “Waking from the Boolean Dream.” For a long time, I’ve been interested in the interplay between symbolic and nonsymbolic processes in the human brain, and in the neural instantiation of both. Smolensky is working on high-level but neurally plausible models of how symbolic activity and logical operations almost “ride on top of” nonsymbolic machinery.
There’s also the question of how much of this is actually accessible to consciousness. I’m more of a mind that anything resembling logical operations in consciousness is something of an illusion. That’s not to say logic isn’t real and apprehended in consciousness, just that true “logic” (as opposed to informal prose, poetry, and holophrasis) is an achievement, not a native mode of the human brain. I think that “symbolic lock” (compositionality) and “logical interlock” (useful symbolic computation) are more a byproduct than how a brain actually arrives at answers, like 99% of the time. I don’t think the brain is using some sort of hyper-efficient logical encoding–I think anything resembling efficiency-without-loss-of-precision is more likely to be domain-specific evolutionary hacks. I could be totally wrong though.
I’m of a mind that eureka moments are produced by vast, impersonal waves of updates and feedback amongst many, many intertwined submodules–surely that’s lawful, and I bet we’ll eventually have a handful of equations that explain the whole thing. But, I think it’s unlikely those equations will describe the instantiation of logical or symbolic operations. Epiphenomenal? And, yet, symbols or, rather, “percepts” or whatever do seem to get passed around, and long chains of inference during sleep are a thing. I could be wrong…