Young IBM Researchers Take on a Cognitive Computing Challenge

By Chris Sciacca, IBM Research.

IBM researcher Ton Engbersen believes that scientists will very soon be able to build a computer comparable in complexity to the human brain. But that is only half the story. He also wants to teach such a machine to learn the way the brain does. And that is where it gets really interesting.

Ton is referring to cognitive computing – the ability of machines to sense, reason, learn, and, in some ways, think. Learning is a key element. These computers will not be based on programs that predetermine every answer or action needed to perform a task; rather, they will be trained with algorithms and through interactions with data and humans.

The coming era of cognitive computing will be the primary focus on Sept. 1, when IBM Research – Zurich conducts a colloquium called “The New Era of Learning Machines.” To learn more about the new era, you can download a free chapter of Smart Machines: IBM’s Watson and the Era of Cognitive Computing, a new book by IBM Research Director John E. Kelly III.

Why do we need cognitive computing? Well, Big Data growth is accelerating as more of the world’s activity is expressed digitally and increasing in volume, speed and uncertainty. Since most data now comes in unstructured forms such as video, images, symbols and natural language, a new computing model is needed in order to make sense of it all.

To help push the envelope on machine learning, Ton has recruited two enthusiastic PhD students in computer science: Adela-Diana Almasi from Romania and Stanislaw Wozniak from Poland (no relation to the Apple co-founder). The two were participants in IBM Research’s internship program and are now in the pre-doctoral research program.

Adela-Diana Almasi, IBM Researcher

Stanislaw explains that their research sits between neuromorphic computing and its biological counterpart, computational neuroscience: “They are both very multi-dimensional fields, often causing confusion due to all of the ‘brain talk’, but regardless, our research will take the best of both worlds.”

Adela-Diana agrees: “We are trying to build a machine learning algorithm that is more biologically inspired than the ones that are currently used, but we need to find just the right level of abstraction so that it’s computationally feasible.”

Finding the right balance is particularly challenging, since the brain is not even fully understood at a biological level. “It’s like trying to build a puzzle when you don’t know what the picture is and you are blindfolded,” she says. “As I consider this challenge, I become less of a computer scientist and more of a philosopher.”

Typically, a lot of human effort is required to train computers. The two interns are working on a system that, like the brain, learns primarily from experience—in the computer’s case, from encounters with data. Take for example weather prediction. This kind of learning system will make predictions about the weather based on historical information and sensor readings; then, as it learns from its mistakes, it will continually get better at predicting.
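The learning-from-mistakes idea can be illustrated with a deliberately tiny sketch (this is an illustrative assumption, not the students' actual system): a predictor that guesses the next temperature from its accumulated history, so its error shrinks as it sees more data.

```python
# Minimal sketch of learning from experience (illustrative only):
# the model predicts the next reading as the mean of all past readings,
# then "learns" by folding each new observation into its history.

class RunningMeanPredictor:
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def predict(self):
        # With no history yet, fall back to a neutral guess of 0.0.
        return self.total / self.count if self.count else 0.0

    def learn(self, observed):
        # Incorporate the actual outcome so future predictions improve.
        self.total += observed
        self.count += 1

# Daily temperature readings (made-up data); track prediction error.
readings = [20.0, 22.0, 21.0, 23.0, 21.5]
model = RunningMeanPredictor()
errors = []
for temp in readings:
    errors.append(abs(model.predict() - temp))
    model.learn(temp)

print(errors)  # errors shrink as the model accumulates experience
```

No rule about the weather is programmed in; the only “knowledge” the model has is what it has extracted from the data stream, which is the essence of the approach described above.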

Stanislaw Wozniak, IBM Researcher

The two tried a new approach to optical character recognition (OCR) based on their concept. OCR is one of the most common uses of machine learning. They developed an algorithm and assembled a large-scale neural network using off-the-shelf graphics cards of the type found in high-end gaming PCs. While most OCR systems are very narrowly focused, they wanted to create one that operated more like the brain, with more of a general-purpose intelligence.
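To give a flavor of what neural-network OCR means at its smallest scale (a toy sketch under stated assumptions, not the researchers' large-scale system), a single artificial neuron can learn from labeled examples to tell two 3x3 pixel “characters” apart:

```python
# Toy perceptron "OCR" (illustrative only): one neuron learns to
# distinguish a vertical bar ("I") from a horizontal bar ("-"),
# each given as a flattened 3x3 bitmap.

I_CHAR = [0, 1, 0,
          0, 1, 0,
          0, 1, 0]
DASH   = [0, 0, 0,
          1, 1, 1,
          0, 0, 0]

weights = [0.0] * 9
bias = 0.0

def predict(pixels):
    activation = bias + sum(w * p for w, p in zip(weights, pixels))
    return 1 if activation > 0 else 0  # 1 = "I", 0 = "-"

# Perceptron learning rule: nudge weights toward the correct label
# whenever the current prediction is wrong.
training = [(I_CHAR, 1), (DASH, 0)]
for _ in range(10):
    for pixels, label in training:
        error = label - predict(pixels)
        bias += 0.1 * error
        weights = [w + 0.1 * error * p for w, p in zip(weights, pixels)]

print(predict(I_CHAR), predict(DASH))  # prints: 1 0
```

A real OCR network stacks many thousands of such units and trains on millions of examples, which is why the graphics cards mentioned above matter: they run those repetitive weight updates massively in parallel.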

While they logged some early successes, when they tried the system out on a large amount of data, the accuracy rate plummeted. “It’s a lot harder than it looks,” says Stanislaw. “It turned out we needed to focus more on a proof of concept before the implementation.”

Now the team is starting to develop a different approach to the same problem—taking their inspiration from the way neurons work in the human brain. This will require a new learning algorithm, so, in a sense, they’re back to square one.

The good news is, as pre-docs, the team has three years before they graduate. The bad news is that Big Data is growing exponentially every day and the need for such a learning system grows with it.

Adela-Diana, who often confers with her grandmother, a biology teacher in Romania, remains confident. She says: “Nature has provided us with a very challenging riddle to solve, but with trial, error and some luck, we will crack it.”

This post originally appeared on A Smarter Planet Blog.
