More specifically:

How to interface human brain and machine

Well, so everyone is just craving to insert some rods and needles and chips and so on into our precious gray matter. I don't, and I figure no one actually needs to. Don't we already have millions (or more) of input-output channels? And don't they work great?

So the actual problem usually discussed is how a machine should understand our thoughts. Another good question: should it understand them at all?

By the way, I think this post requires even more of a name-changing.

So even more specifically:

How to interface human brain and machine brain

Back to the question: no, it should not. Think of it this way: does our visual cortex understand our thoughts? Do our hands understand them? If we intend to use machines directly, meaning, to command a machine the same way we would command anything in our body, we don't need the machine to understand us. Instead, we need to be in the machine. Well, at least partially, that is.

It is nearly (and right now, completely) impossible to understand how a given neural network works by inspecting its neurons or internal activity. We know the general governing principles (after all, we designed them), but after training, we know nothing beyond the fact that the network is an approximation of some generalized function. So how is sticking needles into the brain going to help, when we don't even remotely understand the governing principles of our own brain? Will we?.. Will we suddenly understand them just by observing? And if so, how would that help with understanding the thoughts?
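
To make that claim concrete, here is a minimal sketch (hypothetical, numpy-only, all sizes arbitrary): a tiny network learns XOR just fine, yet its trained weights are a pile of numbers that explain nothing by inspection.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)    # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # output layer

for _ in range(5000):                            # plain gradient descent
    h = np.tanh(X @ W1 + b1)                     # forward pass
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))
    dp = (p - y) / len(X)                        # cross-entropy gradient w.r.t. the logit
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)                # backprop through tanh
    dW1, db1 = X.T @ dh, dh.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= grad                            # learning rate 1.0, updated in place

print(np.round(p, 2))   # approaches [[0], [1], [1], [0]]: the function is learned
print(np.round(W1, 2))  # ...but these weights explain nothing by inspection
```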


We don't know how one works, we don't have the faintest idea how the other works, so let's just… combine them!

Despite how it sounds, it makes quite good sense. While we cannot reasonably change and fine-tune the neural network inside our brain, we can do that to a fair extent with an artificial neural network in a machine. What we really need is to "fool" the brain into accessing this external neural network as it would access… well, itself.

"But that says nothing about the (un)necessity of brain surgeries!" you would say. I would say, "No, it doesn't. But…"

But, the point is that the brain need not access this neural network directly. It can easily connect to the neural network through other means, and that is how it has naturally been done for a veeery long time.

There is a reason why the Turing test works so well: we can feel intelligence. Not because it seems intelligent, or because it can trick us, or because it speaks well. No. Someone feels intelligent because the communication happens between two intelligent beings. We tune ourselves to the other being, and the being on the other end tunes itself to us. That is what we feel. So, in a sense, it is already an almost direct connection, and the only reason it is so ineffective (in a very relative sense, since our communication with machines is so   much   more   ineffective) is that we historically communicate over very complex and imprecise channels. That is, through analog audio and through limited (and quite hard to interpret) visual motions, constrained by the physics of the surrounding world.

Final part (also, TL;DR)

So. What I suggest we need:

  1. Get some goggles with a screen (quite preferably not 3D VR ones, so as not to confuse the eyes; the screen needs to lie in a single plane).
  2. Wire a set of gloves to capture finger and hand movements (or some other sensing technology).
  3. Devise a two-way deep learning network, with one set of inputs and outputs being the gloves and goggles and the other being some interface-application.
  4. Design an unsupervised training algorithm (an approach similar to the one described by Ian Goodfellow in his paper) to train the communication channel between the human and machine.
  5. PROFIT. Hopefully, that will be enough. Almost Cyborg Deep Learning (a rough wiring sketch follows this list). Also, given that deep learning networks have no real self-awareness capabilities, we can be sure that it is the human who will be in control.
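
Here is the promised rough wiring sketch of steps 1 through 3. Every name and dimension is a hypothetical stand-in, and a single dense layer replaces each half of the real two-way deep network; the point is only the data flow: glove readings and application state go in, a goggle image and an application command come out, and the human closes the loop by reacting with new hand movements.

```python
import numpy as np

rng = np.random.default_rng(1)

GLOVE_DIM, APP_DIM, HIDDEN, IMG_H, IMG_W = 22, 16, 64, 32, 32  # made-up sizes

# One dense layer each way: a stand-in for the real two-way deep network.
W_in = rng.normal(scale=0.1, size=(GLOVE_DIM + APP_DIM, HIDDEN))
W_img = rng.normal(scale=0.1, size=(HIDDEN, IMG_H * IMG_W))
W_act = rng.normal(scale=0.1, size=(HIDDEN, APP_DIM))

def step(glove_reading, app_state):
    """One tick: encode both inputs, emit a goggle image and an app command."""
    h = np.tanh(np.concatenate([glove_reading, app_state]) @ W_in)
    image = np.tanh(h @ W_img).reshape(IMG_H, IMG_W)  # shown in the goggles
    command = np.tanh(h @ W_act)                      # sent to the interface-application
    return image, command

glove = rng.normal(size=GLOVE_DIM)  # pretend sensor frame from the gloves
app = rng.normal(size=APP_DIM)      # pretend state of the interface-application
image, command = step(glove, app)
print(image.shape, command.shape)   # (32, 32) (16,)
```

In a real setup, `step` would run every frame; what its weights should become is exactly what item 4 is about.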

The part about the training algorithm I kind of skipped before, so I'll elaborate on it here. We need a method that tries to produce a distinctive and (quite importantly) meaningful output on the screen, using the interface-application and the input from the gloves. Meaningful doesn't mean the output is an understandable image of some sort; it means the image necessarily differs, on some scale, based on the data, and not just randomly. It is easier to say what we don't want: 1) we don't want the image to reflect the glove movements only, 2) we don't want the image to reflect the interface-application only, 3) we don't want the image to be (almost) random and unpredictable, and (most importantly) 4) we don't want the image to simply reflect some user interface with a mouse pointer…
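
As a hedged sketch of those requirements (not the actual algorithm; the toy renderer, batch shapes, and weightings below are all made up), the first three "don't wants" can be written as loss terms: reward the image for changing when either input stream is swapped out, and penalize it for changing too much under a tiny input nudge. Requirement 4 concerns the content of the mapping and doesn't reduce to a term this simple, so it is left out here.

```python
import numpy as np

rng = np.random.default_rng(2)

def render(glove, app, W):
    """Toy renderer: (batch, inputs) -> (batch, pixels), stand-in for the network."""
    return np.tanh(np.concatenate([glove, app], axis=1) @ W)

B, GD, AD, PIX = 64, 22, 16, 1024                 # made-up sizes
W = rng.normal(scale=0.1, size=(GD + AD, PIX))
glove = rng.normal(size=(B, GD))
app = rng.normal(size=(B, AD))

img = render(glove, app, W)
img_other_glove = render(glove[rng.permutation(B)], app, W)
img_other_app = render(glove, app[rng.permutation(B)], W)

def closeness(a, b):
    return float(np.mean((a - b) ** 2))

# 1) don't reflect the glove only: a different app state should change the image
loss_glove_only = -closeness(img, img_other_app)
# 2) don't reflect the app only: a different glove reading should change the image
loss_app_only = -closeness(img, img_other_glove)
# 3) don't be chaotic: a tiny input nudge should barely change the image
nudged = render(glove + 0.01 * rng.normal(size=glove.shape), app, W)
loss_unstable = closeness(img, nudged)

loss = loss_glove_only + loss_app_only + 10.0 * loss_unstable
print(round(loss, 4))  # the quantity a real training loop would minimize
```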


Though it may seem overly complex and too far-fetched, try also seeing it this way:

  1. Gloves and goggles aren't something that requires tons of effort to develop (and are basically almost readily available).
  2. The kind of deep learning network required is essentially the same as the one that would be needed with the "needles".
  3. The training algorithm developed may have lots of other uses (and may also be required with the "needles").

The general takeaway is that, before sticking those "needles" (I just like how eerie it sounds) into the brain, there are some things that could be attempted first.

P.S. Comments and discussions are heartily welcome ^_^

Cover artwork: "Happy colors" by Kevin Dooley used under Creative Commons Attribution 2.0 license