
Your Next Computer Will Be Able to Think Like You (And Be Your Robotic Companion)

by admin

Here you are, ambling along a formidable stretch of the Great Wall of China. You speak no Chinese, and yet, through your translator, you’re able to understand the nuances of its history and architecture. He not only interprets what the tour guide and fellow travelers say, but also answers all your questions with a quick, yet comprehensive, store of knowledge.

He never tires and he doesn’t complain, and after a few brief stops, he picks up on your fatigue and suggests a break to take in the scenery. As you gaze at one of the great wonders of the world, you realize his near-magical ability to sense your needs — even before you can articulate them — has enhanced the experience.

From Rosie the Robot in “The Jetsons” to Pixar’s “Wall-E,” or Sonny from “I, Robot” to Johnny Five in “Short Circuit,” we’ve been enchanted by the idea that machines may one day become true companions, thinking and even feeling the way we do. But as Siri so often clumsily demonstrates, that reality is still just a cultural fantasy.

Today’s machines aren’t designed to quickly process data, learn from it and adapt their actions accordingly. Instead, they crunch information linearly, then spit out a result to execute. And if they run into a situation without a solution, they simply stop working — or, in Siri’s case, ask you again and again whether you’d like to search the Web.

Yet programmers haven’t given up on the idea of genuinely interactive machines whose “thinking” more closely reflects our own — robots that can learn and then adjust their knowledge and actions accordingly. IBM scientists recently announced a big step toward that distant dream, creating a chip that takes a page from our own greatest thinking machine — the human brain. But are we ready to develop an artificial consciousness when we have yet to fully understand our own thoughts?

The Human Brain: Nature’s Ultimate Computer Chip

The brain is the center of the nervous system, exerting control over every other organ in the body. Its neural tissue is layered and folded in a way that maximizes its surface area, which houses over 100 billion nerve cells. By exchanging electrical and chemical signals among themselves, these specialized cells, called neurons, can recognize a face from 150 feet away, hold a lively conversation and store over 70 years’ worth of memories, ready for access at a moment’s notice.

Those tasks sound simple to do, but cognitively speaking, they’re highly complex. In fact, not even the world’s most powerful machines can keep up with the mental capacities of a rambunctious four-year-old.

Why? According to MIT Technology Review, our complex network of neurons is extremely efficient, processing vast amounts of data using low amounts of power. It also has a remarkable ability to adapt and change, and as we learn and gain experiences, neural pathways are created and reorganized in ways we don’t yet fully understand.

Attempts to Emulate the Brain

The brain also excels at carrying out multiple tasks at the same time. Neurons work in parallel, producing an intricate web of signals that pulse back and forth, triggering one another, and complex tasks are broken down into simpler pieces. When you survey a scene, for example, the visual data is divided into four components — color, motion, shape and depth — which are analyzed separately and then recombined into a single image we understand.
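To make that divide-and-recombine idea concrete, here is a rough, purely illustrative Python sketch (not how neurons actually compute, and every function name below is invented) in which four independent analyzers run in parallel before their results are merged back into one description of the scene.

```python
# Purely illustrative: four hypothetical analyzers run in parallel, and their
# outputs are then recombined into a single description of the scene.
from concurrent.futures import ThreadPoolExecutor

def analyze_color(frame):  return {"color": "grey stone"}              # hypothetical
def analyze_motion(frame): return {"motion": "tourists walking"}       # hypothetical
def analyze_shape(frame):  return {"shape": "long wall, watchtowers"}  # hypothetical
def analyze_depth(frame):  return {"depth": "ridge in the distance"}   # hypothetical

def perceive(frame):
    analyzers = [analyze_color, analyze_motion, analyze_shape, analyze_depth]
    with ThreadPoolExecutor(max_workers=4) as pool:
        partial_results = list(pool.map(lambda analyze: analyze(frame), analyzers))
    scene = {}  # recombine the separately computed pieces into one "image"
    for result in partial_results:
        scene.update(result)
    return scene

print(perceive("camera frame placeholder"))
```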

Computers, meanwhile, work in a step-by-step fashion, which is why they need immense amounts of computing power to approach even a fraction of what the brain is capable of. Japan’s K Computer, for example, needed over 80,000 processors and 40 minutes to simulate just one second of the brain’s neural activity. And at the current pace of advancement, processors won’t be able to fully emulate the complexities of the brain until the 2050s.
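To put those figures in perspective, a quick back-of-the-envelope calculation using only the numbers quoted above shows how far from real time that simulation ran.

```python
# Back-of-the-envelope: how much slower than real time was the K Computer run?
simulated_seconds = 1            # one second of neural activity
wall_clock_seconds = 40 * 60     # the 40 minutes the simulation required
slowdown = wall_clock_seconds / simulated_seconds
print(f"{slowdown:,.0f}x slower than real time")  # prints "2,400x slower than real time"
```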

The Dawn of Cognitive Computing

Scientists have tried to approximate the brain with artificial intelligence. According to Discover Magazine, in 1992, Masuo Aizawa, a biochemist at the Tokyo Institute of Technology, grafted neurons to silicon pieces in the hopes of creating primitive biological circuitry.

Unlike modern processors, packed with millions of tiny transistors, his attempt was crude, to say the least — a glass slide lined with electronic stripes, sitting at the bottom of a plastic dish filled with a clear liquid and dotted with small globs. Those globs were, in fact, lab-grown neurons, and the beginnings of a cell-by-cell construction of an artificial brain.

Building Blocks of Artificial Consciousness

In 2011, scientists at the University of Florida seeded a silicon chip with neurons and stem cells, extending Aizawa’s idea of a “brain in a dish.” They then connected the “petri-brain” to a flight simulator program that fed it the plane’s horizontal and vertical coordinates. By using electrodes to stimulate the neurons, the researchers were able to fire off instructions to a “body” — or in this case, the simulator.

“Right now the process it’s learning is very simplistic,” Thomas DeMarse, the lead researcher, told Wired. “It’s basically making a decision about whether to move the stick to the left or to the right or forwards and backwards and it learns how much to push the stick depending upon how badly the aircraft is flying.”

Researchers hope, one day, to use the bursts of activity to “reboot” quiet areas of the brain, for example, after a stroke.
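The decision DeMarse describes amounts to a very simple feedback rule: push the stick in proportion to how far the plane has drifted. The toy sketch below is only an analogy of that rule, not the lab’s actual setup; the roll and pitch inputs and the gain value are assumptions made for illustration.

```python
# A toy version of the decision described above: nudge the stick left/right and
# forward/back in proportion to how badly the simulated plane is flying.
# The error inputs and gain are hypothetical, not the lab's actual code.
def stick_command(roll_error_deg, pitch_error_deg, gain=0.5):
    """Return (aileron, elevator) nudges; bigger errors get bigger corrections."""
    aileron = -gain * roll_error_deg     # rolled right -> push stick left
    elevator = -gain * pitch_error_deg   # nose down -> pull stick back
    return aileron, elevator

# Example: the plane has rolled 10 degrees right and pitched 4 degrees down.
print(stick_command(roll_error_deg=10.0, pitch_error_deg=-4.0))  # (-5.0, 2.0)
```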

Software Solutions: Mimicking the Human Brain

Attempts to use hardware to replicate the brain are mostly confined to petri dishes, but researchers have also looked to software for solutions. The Blue Brain Project at Switzerland’s renowned Ecole Polytechnique Federale de Lausanne uses supercomputers to try to create software that, essentially, mimics the human brain, right down to the last neuron.

“The human brain is an immensely powerful, energy-efficient, self-learning, self-repairing computer,” according to Blue Brain’s website. “If we could understand and mimic the way it works, we could revolutionize information technology, medicine and society.”

Neural-Inspired Chips: Bridging the Gap

The brain is difficult to emulate, to say the least, but computer makers are coming just a bit closer to that goal. In 2011, long-time powerhouse IBM took the brain’s knack for parallel processing and applied it to a computer chip. By packing together a network of “neurosynaptic cores,” which echo the functions of neurons, the chip can process many different signals at once. According to MIT Technology Review, it analyzes information at the place it is received, instead of shunting it to a different circuit or part of the machine. That not only improves speed and performance, but also reduces the power needed.

For example, one sensor can learn to play a game, while another helps a device move in the right direction and a third processes visual information — all working in tandem on their separate functions. With $21 million in DARPA funding, IBM plans to use these chips to form the foundation of a more powerful, agile supercomputer.

Computers that use conventional cores, like IBM’s own Watson machine, which in 2011 defeated two of the greatest “Jeopardy!” champions ever, can still solve complex tasks by routing data to a central location for processing and then moving it to specialized components for further crunching. But they need huge amounts of capacity and power: Watson required a whopping 16 terabytes of memory and a cluster of powerful servers to beat the best players, while the brain needs only 10 watts to execute its dazzling array of tasks.

True North Programming Approach

In the years since IBM unveiled its super-chip, though, no programming method has been able to take full advantage of its neural-inspired, parallel features. Traditional programming follows what’s known as the “Von Neumann” architecture, a linear approach that reads instructions in a line-by-line sequence. But earlier this month, the company announced a programming approach, called “True North,” that makes more effective use of those brain-inspired chips.

According to MIT Technology Review, True North builds programs from special blueprints, called “corelets,” which function like senses. Using a library of 150 pre-designed corelets — each dedicated to a particular task — developers can activate one to sense motion, for example, while using another to measure direction and a third to work with color. Corelets can be linked to, or even nested within, one another in increasingly complex structures — much like the way a complex sense such as vision links various functions in the brain into one seamless task — all parts working at the same time.
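As a loose analogy only (this is not IBM’s corelet language, and every class and name here is invented), the idea of single-purpose building blocks that can be linked or nested might look something like this in Python.

```python
# Loose analogy only: "corelets" modeled as composable objects, each handling
# one task, that can be nested inside a larger corelet. Not IBM's actual API.
class Corelet:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def run(self, signal):
        # A real neurosynaptic core would process spikes in parallel; here each
        # corelet simply annotates the signal and hands it to its children.
        outputs = {self.name: f"processed {signal}"}
        for child in self.children:
            outputs.update(child.run(signal))
        return outputs

# Three single-purpose corelets nested inside one composite "vision" corelet.
motion = Corelet("motion")
direction = Corelet("direction")
color = Corelet("color")
vision = Corelet("vision", children=[motion, direction, color])

print(vision.run("camera input"))
```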

“It doesn’t make sense to take a programming language from the previous era and try to adapt it to a new architecture. It’s like a square peg in a round hole,” Dharmendra Modha, lead researcher, told MIT Technology Review. “You have to rethink the very notion of what programming means.”

With sensory input from corelets, programmers can develop software that more closely resembles the way the brain processes information. Using True North, they could code software to predict atmospheric changes, create lenses that help the blind identify objects and build customer service programs that use voice recognition to analyze a person’s tone of voice, adjusting their behavior depending on the emotions they detect.

Integration of Approaches: Towards Human-Like Cognitive Abilities

The point isn’t to replace the old Von Neumann model, which is good at crunching numbers and information, but rather, in the future, to combine the linear “left-brain” Von Neumann approach with the more diffuse “right-brain” corelets to create a versatile machine capable of nearly human-like cognitive abilities. If a machine is ever to learn to see and perceive like a human, it will need both approaches — much like how our brains have a left and a right hemisphere, each dedicated to certain domains and functions.
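One way to picture that pairing, sketched loosely in the article’s own left-brain/right-brain framing (every function here is invented for illustration), is a parallel perception stage feeding a conventional, step-by-step reasoning stage.

```python
# Invented illustration of the hybrid described above: a diffuse, parallel
# "right-brain" perception stage feeding a linear "left-brain" reasoning stage.
from concurrent.futures import ThreadPoolExecutor

def detect_edges(frame):  return "edges: long wall"      # hypothetical detector
def detect_motion(frame): return "motion: crowd moving"  # hypothetical detector
def detect_depth(frame):  return "depth: valley below"   # hypothetical detector

def right_brain(frame):
    """Run all detectors at once, the way corelets work in tandem."""
    detectors = [detect_edges, detect_motion, detect_depth]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda detect: detect(frame), detectors))

def left_brain(features):
    """Step through the features one at a time, Von Neumann style."""
    summary = []
    for feature in features:
        summary.append(feature.upper())
    return "; ".join(summary)

print(left_brain(right_brain("camera frame")))
```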

IBM’s Breakthrough in Cognitive Computing

IBM’s experiments are part of its larger push toward “cognitive computing,” in which machines interact and learn from people. Cognitive systems are different from linear, data-crunching systems: their algorithms sense, infer, predict and, in fact, “think,” adapting and learning from ongoing streams of data. Designed to improve over time, they learn a subject and take in its terminology, processes, rules and preferred interactions.
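In rough terms, and as a minimal invented sketch rather than a description of any IBM system, that difference is a loop that predicts, checks itself against what actually arrives and adjusts.

```python
# Minimal, invented sketch of the sense -> predict -> adapt loop that
# distinguishes a cognitive system from a fixed, linear program.
def cognitive_loop(stream, learning_rate=0.1):
    belief = 0.0                         # the system's current estimate
    for observation in stream:           # sense: take in the ongoing data stream
        prediction = belief              # predict before seeing the new value
        error = observation - prediction
        belief += learning_rate * error  # adapt: learn from the miss
        yield prediction, belief

# Example: the belief drifts toward the values actually being observed.
for prediction, belief in cognitive_loop([10, 12, 11, 13]):
    print(f"predicted {prediction:.2f}, updated belief to {belief:.2f}")
```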

Role in the Age of Big Data

Cognitive computing is expected to play a key role in this age of “Big Data,” where the volume and speed of accumulated data — and the complexity of the data itself, in the form of images, video, symbols and natural language — are rapidly increasing. Machines that use the brain as their model will have a particular edge when confronting that mountain of data, as they can process more information simultaneously, at faster rates, while “learning from” it at the same time.

IBM’s Cognitive Computing Initiatives

IBM has long experimented with cognitive computing, first with Watson and now with its neurosynaptic chips. But rivals like Google are getting into the game as well, hiring inventor and futurist Ray Kurzweil last year as a director of engineering.

Ray Kurzweil’s Role at Google

Kurzweil, who invented the flatbed scanner and commercial text-to-speech synthesizer, is known as one of technology’s foremost proponents of artificial intelligence. He believes machines can, and will, one day develop consciousness, according to Wired. At Google, his role is to help develop better natural language recognition for its search engine — not just to find and log keywords, but to learn and understand language like we do.

Challenges and Ethical Considerations

1. The Complexity of Human Consciousness

Not everyone, though, is excited about cognitive computing. The very idea — patterning hardware and software after the structure of the brain — runs up against the fact that the brain itself is still a mystery to us. It is one thing to understand its physical functions, but quite another to explain how the brain becomes a mind.

2. Beyond Logic and Reasoning

Human consciousness is much more than logic and reasoning. After all, can a computer learn to tell a joke? Or perhaps flirt? Are there secret algorithms to program empathy? And can a machine learn to feel longing, yearning, joy or sorrow?

We don’t yet fully understand the way processes like memory, emotions and reasoning work, and the idea that we’re programming machines to think like us — when we have barely scratched the surface of understanding human consciousness ourselves — is deeply unsettling for many.

3. Divergent Views Among Tech Leaders

Even tech leaders are divided on the overarching project of endowing machines with a kind of consciousness. In 2008, for example, Bill Joy, co-founder of Sun Microsystems, expressed skepticism and fear of a dystopian world where machines have a “mind” of their own, according to Wired. And yet, computer scientists like Kurzweil and IBM’s team press on.

4. Kurzweil’s Vision

Kurzweil believes computers will, at least, achieve natural language understanding, so they better understand us — and he’s willing to state a date for that milestone. “I’ve had a consistent date of 2029 for that vision,” he told Wired. “And that doesn’t just mean logical intelligence. It means emotional intelligence — being funny, getting the joke, being sexy, being loving, understanding human emotion. That’s actually the most complex thing we do. That is what separates computers and humans today. I believe that gap will close by 2029.”

5. Advancements in Cognitive Computing

For that to happen, cognitive computing needs to take its next leap, creating components to stitch together into super-machines we can only begin to dream about. While our sci-fi fantasies and nightmares have explored the idea of computers endowed with human consciousness, those scenarios are edging closer to reality, faster than many had expected. The big ideas behind tiny chips will form the neurons and pathways of a machine consciousness. Though it seems far-fetched, entire worlds have been founded on much less.
