December 13, 2010

Athlete Robot Learning to Run Like Human

(December 13, 2010)  Japanese researcher Ryuma Niiyama wants to build a biped robot that runs.

But not like Asimo, whose running gait is a bit, well, mechanical.

Niiyama wants a robot with the vigor and agility of a human sprinter.

To do that, he's building a legged bot that mimics our musculoskeletal system.

He calls his robot Athlete. Each leg has seven sets of artificial muscles. The sets, each with one to six pneumatic actuators, correspond to muscles in the human body -- gluteus maximus, adductor, hamstring, and so forth.
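The grouping described above -- several pneumatic actuators acting together as one "muscle" -- can be sketched in a few lines. The actuator counts and force figures below are illustrative assumptions, not Niiyama's actual specifications:

```python
from dataclasses import dataclass

@dataclass
class MuscleGroup:
    """One set of pneumatic actuators acting as a single artificial muscle."""
    name: str
    actuators: int              # one to six pneumatic actuators per set
    max_force_n: float = 300.0  # hypothetical per-actuator force at full pressure

    def force(self, activation: float) -> float:
        """Total pull for an activation level clamped to [0, 1]."""
        activation = max(0.0, min(1.0, activation))
        return activation * self.actuators * self.max_force_n

# The three muscle groups named in the article; actuator counts are made up.
leg = [
    MuscleGroup("gluteus maximus", actuators=6),
    MuscleGroup("adductor", actuators=2),
    MuscleGroup("hamstring", actuators=4),
]

total = sum(m.force(0.5) for m in leg)
```

Grouping actuators this way lets a controller command seven muscle-level activations per leg instead of driving each actuator individually.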

To simplify things a bit, the robot uses prosthetic blades, of the type that double amputees use to run.

And to add a human touch, Niiyama makes the robot wear a pair of black shorts.

Ryuma Niiyama, Web Page >>

December 2, 2010

Meet MoNETA - the brain-inspired chip that will outsmart us all

The Brain of a New Machine

(December 2, 2010)  Stop us if you’ve heard this one before: In the near future, we’ll be able to build machines that learn, reason, and even emote their way to solving problems, the way people do. If you’ve ever been interested in artificial intelligence, you’ve seen that promise broken countless times. Way back in the 1960s, the relatively recent invention of the transistor prompted breathless predictions that machines would outsmart their human handlers within 20 years. Now, 50 years later, it seems the best we can do is automated tech support, intoned with a preternatural calm that may or may not send callers into a murderous rage.

So why should you believe us when we say we finally have the technology that will lead to a true artificial intelligence? Because of MoNETA, the brain on a chip. MoNETA (Modular Neural Exploring Traveling Agent) is the software we’re designing at Boston University’s Department of Cognitive and Neural Systems, which will run on a brain-inspired microprocessor under development at HP Labs in California. It will function according to the principles that distinguish us mammals most profoundly from our fast but witless machines. MoNETA (the goddess of memory—cute, huh?) will do things no computer ever has. It will perceive its surroundings, decide which information is useful, integrate that information into the emerging structure of its reality, and in some applications, formulate plans that will ensure its survival. In other words, MoNETA will be motivated by the same drives that motivate cockroaches, cats, and humans.

read entire article (pdf) >>

Strange Discovery: Bacteria Built with Arsenic

(December 2, 2010)  In a study that could rewrite biology textbooks, scientists have found the first known living organism that incorporates arsenic into the working parts of its cells. What's more, the arsenic replaces phosphorus, an element long thought essential for life. The results, based on experiments at the Stanford Synchrotron Radiation Lightsource, were published online today in Science Express.

"It seems that this particular strain of bacteria has actually evolved in a way that it can use arsenic instead of phosphorus to grow and produce life," said SSRL Staff Scientist Sam Webb, who led the research at the Department of Energy's SLAC National Accelerator Laboratory. "Given that arsenic is usually toxic, this finding is particularly surprising."

Phosphorus forms part of the chemical backbone of DNA and RNA, the spiraling structures that carry genetic instructions for life. It is also a central component of ATP, which transports the chemical energy needed for metabolism within cells. Scientists have for decades thought that life could not survive without it.

read entire press release

November 30, 2010

Your brain on culture

The burgeoning field of cultural neuroscience is finding that culture influences brain development, and perhaps vice versa.

(November 30, 2010)  When an American thinks about whether he is honest, his brain activity looks very different than when he thinks about whether another person is honest, even a close relative. That’s not true for Chinese people. When a Chinese man evaluates whether he is honest, his brain activity looks almost identical to when he is thinking about whether his mother is honest.

That finding — that American and Chinese brains function differently when considering traits of themselves versus traits of others (Neuroimage, Vol. 34, No. 3) — supports behavioral studies that have found that people from collectivist cultures, such as China, think of themselves as deeply connected to other people in their lives, while Americans adhere to a strong sense of individuality.

The study also shows the power of cultural neuroscience, the growing field that uses brain-imaging technology to deepen the understanding of how environment and beliefs can shape mental function. Barely heard of just five years ago, the field has become a vibrant area of research, and the University of Michigan, the University of California, Los Angeles, and Emory University have created cultural neuroscience centers. In addition, in April a cultural neuroscience meeting at the University of Michigan attracted such psychology luminaries as Hazel Markus, PhD, Michael Posner, PhD, Steve Suomi, PhD, and Claude Steele, PhD, to discuss their work in the context of cultural neuroscience.


November 24, 2010

Jet lagged and forgetful? It’s no coincidence

(November 24, 2010)  Chronic jet lag alters the brain in ways that cause memory and learning problems long after one’s return to a regular 24-hour schedule, according to research by University of California, Berkeley, psychologists.

Twice a week for four weeks, the researchers subjected female Syrian hamsters to six-hour time shifts – the equivalent of a New York-to-Paris airplane flight. During the last two weeks of jet lag and a month after recovery from it, the hamsters’ performance on learning and memory tasks was measured.

As expected, during the jet lag period, the hamsters had trouble learning simple tasks that the hamsters in the control group aced. What surprised the researchers was that these deficits persisted for a month after the hamsters returned to a regular day-night schedule.

What’s more, the researchers discovered persistent changes in the brain, specifically within the hippocampus, a part of the brain that plays an integral role in memory processing. They found that, compared to the hamsters in the control group, the jet-lagged hamsters had only half the number of new neurons in the hippocampus following the month-long exposure to jet lag. New neurons are constantly being added to the adult hippocampus and are thought to be important for hippocampal-dependent learning, said UC Berkeley psychologist Lance Kriegsfeld, while memory problems are associated with a drop in cell maturation in this brain structure.

read entire press release >>

November 22, 2010

Mindfulness practice leads to increases in regional brain gray matter density

(November 22, 2010)  Therapeutic interventions that incorporate training in mindfulness meditation have become increasingly popular, but to date, little is known about neural mechanisms associated with these interventions. Mindfulness-Based Stress Reduction (MBSR), one of the most widely used mindfulness training programs, has been reported to produce positive effects on psychological well-being and to ameliorate symptoms of a number of disorders. Here, we report a controlled longitudinal study to investigate pre-post changes in brain gray matter concentration attributable to participation in an MBSR program. Anatomical MRI images from sixteen healthy, meditation-naïve participants were obtained before and after they underwent the eight-week program. Changes in gray matter concentration were investigated using voxel-based morphometry, and compared to a wait-list control group of 17 individuals. Analyses in a priori regions of interest confirmed increases in gray matter concentration within the left hippocampus. Whole brain analyses identified increases in the posterior cingulate cortex, the temporo-parietal junction, and the cerebellum in the MBSR group compared to the controls. The results suggest that participation in MBSR is associated with changes in gray matter concentration in brain regions involved in learning and memory processes, emotion regulation, self-referential processing, and perspective taking.

image >>

October 27, 2010

Brain Control

(October 27, 2010)  Ed Boyden is learning how to alter behavior by using light to turn neurons on and off.

The equipment in Ed Boyden’s lab at MIT is nothing if not eclectic. There are machines for analyzing and assembling genes; a 3-D printer; a laser cutter capable of carving an object out of a block of metal; apparatus for cultivating and studying bacteria, plants, and fungi; a machine for preparing ultrathin slices of the brain; tools for analyzing electronic circuits; a series of high-resolution imaging devices. But what Boyden is most eager to show off is a small, ugly thing that looks like a hairy plastic tooth. It’s actually the housing for about a dozen short optical fibers of different lengths, each fixed at one end to a light-emitting diode. When the tooth is implanted in, say, the brain of a mouse, each of those LEDs can deliver light to a different location. Using the device, Boyden can begin to control aspects of the mouse’s behavior.

Mouse brains, or any other brains, wouldn’t normally respond to embedded lights. But Boyden, who has appointments at MIT as eclectic as his lab equipment (assistant professor at the Media Lab, joint professor in the Department of Biological Engineering and the Department of Brain and Cognitive Sciences, and leader of the Synthetic Neurobiology Group), has modified certain brain cells with genes for light-sensitive proteins found in plants, fungi, and bacteria. Because the proteins cause the brain cells to fire when exposed to light, they give Boyden a way to turn the genetically engineered neurons on and off.
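The control principle above -- each LED-coupled fiber illuminates a different location, and only opsin-expressing neurons at a lit location fire -- can be modeled as a toy channel switchboard. The channel count matches the article's "about a dozen" fibers; the firing rule is a simplification, not Boyden's actual scheme:

```python
class OpticalImplant:
    """Toy model of a multi-fiber optical implant with per-channel LEDs."""

    def __init__(self, n_channels: int = 12):
        # Each channel is one LED + fiber pair aimed at a distinct location.
        self.lit = [False] * n_channels

    def set_channel(self, channel: int, on: bool) -> None:
        """Switch one LED on or off."""
        self.lit[channel] = on

    def fires(self, neuron_channel: int, light_sensitive: bool) -> bool:
        """A neuron fires only if it expresses the opsin AND its spot is lit."""
        return light_sensitive and self.lit[neuron_channel]

implant = OpticalImplant()
implant.set_channel(3, True)  # deliver light to one location only
```

The point of the two-condition rule is specificity: unmodified neurons ignore the light, and modified neurons outside the lit channel stay silent.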

read entire press release

Controlling Individual Cortical Nerve Cells by Human Thought

(October 27, 2010)  Five years ago, neuroscientist Christof Koch of the California Institute of Technology (Caltech), neurosurgeon Itzhak Fried of UCLA, and their colleagues discovered that a single neuron in the human brain can function much like a sophisticated computer and recognize people, landmarks, and objects, suggesting that a consistent and explicit code may help transform complex visual representations into long-term and more abstract memories.

Now Koch and Fried, along with former Caltech graduate student and current postdoctoral fellow Moran Cerf, have found that individuals can exert conscious control over the firing of these single neurons—despite the neurons' location in an area of the brain previously thought inaccessible to conscious control—and, in doing so, manipulate the behavior of an image on a computer screen.

journal reference >>

October 14, 2010

I want to see what you see: Babies treat “social robots” as sentient beings

(October 14, 2010)  Andrew Meltzoff, co-director of the University of Washington’s Institute for Learning and Brain Sciences, and Rajesh Rao, University of Washington associate professor of computer science and engineering, with the humanoid robot used to demonstrate “social” interactions to babies.

Diagram of the test phase. Top panel: The baby sits across from the robot. Middle panel: Robot turns its “head” toward a toy. Babies who did not watch the robot play games with the researcher did not look to see where the robot looked. Bottom panel: Babies who had watched the robot play games with the researcher followed the robot’s “gaze.” They wanted to see what the robot was seeing.
Babies are curious about nearly everything, and they’re especially interested in what their adult companions are doing. Touch your tummy, they’ll touch their own tummies. Wave your hands in the air, they’ll wave their own hands. Turn your head to look at a toy, they’ll follow your eyes to see what’s so exciting.

read entire press release >>

October 6, 2010

Amazing Robot Controlled By Rat Brain Continues Progress

(October 6, 2010)  Some technologies are so cool they make you do a double take. Case in point: robots being controlled by rat brains. Kevin Warwick, once a cyborg and still a researcher in cybernetics at the University of Reading, has been working on creating neural networks that can control machines. He and his team have taken the brain cells from rats, cultured them, and used them as the guidance control circuit for simple wheeled robots. Electrical impulses from the bot enter the batch of neurons, and responses from the cells are turned into commands for the device. The cells can form new connections, making the system a true learning machine. Warwick hasn't released any new videos of the rat brain robot for the past few years, but the three older clips we have for you below are still awesome. He and his competitors continue to move this technology forward - animal cyborgs are real.

The skills of these rat-robot hybrids are very basic at this point. Mainly the neuron control helps the robot to avoid walls. Yet that obstacle avoidance often shows clear improvement over time, demonstrating how networks of neurons can grant simple learning to the machines. Whenever I watch the robots in the videos below I have to do a quick reality check - these machines are being controlled by biological cells! It's simply amazing.
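The closed loop described above -- sensor impulses stimulate the cultured neurons, their responses become motor commands, and new connections improve wall avoidance over time -- can be caricatured with a single plastic connection and a Hebbian-style update. Everything here (the probabilities, the learning rule, the trial structure) is an illustrative stand-in, not Warwick's actual preparation:

```python
import random

random.seed(0)

weight = 0.1         # strength of the wall-signal -> turn-response connection
learning_rate = 0.05

def network_response(wall_near: bool) -> bool:
    """Fire a 'turn' response with probability set by connection strength."""
    return wall_near and random.random() < weight

def hebbian_update(wall_near: bool, turned: bool) -> None:
    """Strengthen the connection when input and response coincide."""
    global weight
    if wall_near and turned:
        weight = min(1.0, weight + learning_rate)

# Repeatedly approach a wall; a miss (no turn) counts as a collision.
collisions_per_block = []
for block in range(5):
    collisions = 0
    for trial in range(100):
        turned = network_response(wall_near=True)
        hebbian_update(wall_near=True, turned=turned)
        if not turned:
            collisions += 1
    collisions_per_block.append(collisions)
```

Because the connection only strengthens when stimulus and response coincide, collisions fall across blocks -- the same "clear improvement over time" the videos show, in cartoon form.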

journal reference >>

October 1, 2010

How my predictions are faring — an update by Ray Kurzweil

How My Predictions Are Faring | Overview

(October 1, 2010)  In this essay I review the accuracy of my predictions going back a quarter of a century. Included herein is a discussion of my predictions from The Age of Intelligent Machines (which I wrote in the 1980s), all 147 predictions for 2009 in The Age of Spiritual Machines (which I wrote in the 1990s), plus others.

Perhaps my most important predictions are implicit in my exponential graphs. These trajectories have indeed continued on course and I discuss these updated graphs below.

My core thesis, which I call the law of accelerating returns, is that fundamental measures of information technology follow predictable and exponential trajectories, belying the conventional wisdom that you can’t predict the future.

There are still many things — which project, company or technical standard will prevail in the marketplace, or when peace will come to the Middle East — that remain unpredictable, but the underlying price/performance and capacity of information is nonetheless remarkably predictable. Surprisingly, these trends are unperturbed by conditions such as war or peace and prosperity or recession.
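Kurzweil's "predictable exponential trajectory" claim amounts to saying that price/performance data fall on a straight line in log space. A minimal sketch, using made-up data points that double every year (not Kurzweil's actual figures), shows how the doubling rate drops out of an ordinary least-squares fit:

```python
import math

# Hypothetical price/performance measurements; the trend, not the numbers,
# is the point.
years = [2000, 2002, 2004, 2006, 2008, 2010]
perf = [1.0, 4.0, 16.0, 64.0, 256.0, 1024.0]

# Least squares on log2(perf) vs. elapsed years gives doublings per year.
n = len(years)
xs = [y - years[0] for y in years]
ys = [math.log2(p) for p in perf]
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)

doubling_time_years = 1.0 / slope
```

A fitted line in log space is also how such a trend is extrapolated forward, which is what makes the capacity predictions testable.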

download full paper "How My Predictions Are Faring" (pdf) >>

September 22, 2010

Berkeley Lab Scientists Reveal Path to Protein Crystallization

(September 22, 2010)  By assembling a crystalline envelope around a cell, surface-layer (S-layer) proteins serve as the first point of contact between bacteria, extremophiles, and other types of microbes and their environment.  Now, scientists at the Molecular Foundry, a nanoscience user facility at Berkeley Lab, have used atomic force microscopy to image in real time how S-layer proteins form crystals in a cell-like environment. This direct observation of protein assembly could provide researchers with insight into how microorganisms stave off antibiotics or lock carbon dioxide into minerals.

“Many proteins self-assemble into highly ordered structures that provide organisms with critical functions, such as cell adhesion to surfaces, transformation of CO2 into minerals, propagation of disease, and drug resistance,” said James DeYoreo, Deputy Director of the Molecular Foundry. “This work is the first to provide a direct molecular-level view of the assembly pathway in vitro. Once this knowledge can be extended to assembly in a living system, it may lead to strategies for capitalizing on or interfering with these functions.”

Unraveling the pathway for S-layer formation allows scientists to investigate how bacteria or other microbes negotiate interactions with their environment. DeYoreo and colleagues employed in situ atomic force microscopy—a probe technique used to study a crystal’s surface in its natural setting with atomic precision—to watch S-layer proteins assemble from solution onto a flat, biological membrane called a lipid bilayer. Unlike classical crystal growth, in which atoms form into ordered ‘seeds’ and grow in size, the team showed S-layer proteins form unstructured blobs on the bilayers before transforming into a crystalline structure over the course of minutes.

journal reference >>

Human-powered Ornithopter Becomes First Ever to Achieve Sustained Flight

(September 22, 2010)  Aviation history was made when the University of Toronto’s human-powered aircraft with flapping wings became the first of its kind to fly continuously.

The “Snowbird” performed its record-breaking flight on August 2 at the Great Lakes Gliding Club in Tottenham, Ont., witnessed by the vice-president (Canada) of the Fédération Aéronautique Internationale (FAI), the world-governing body for air sports and aeronautical world records. The official record claim was filed this month, and the FAI is expected to confirm the ornithopter’s world record at its meeting in October.

For centuries, engineers have attempted such a feat, ever since Leonardo da Vinci sketched the first human-powered ornithopter in 1485.

But under the power and piloting of Todd Reichert (EngSci OT5), an Engineering PhD candidate at the University of Toronto Institute for Aerospace Studies (UTIAS), the wing-flapping device sustained both altitude and airspeed for 19.3 seconds, and covered a distance of 145 metres at an average speed of 25.6 kilometres per hour.

read entire press release

September 13, 2010

Wheelchair Makes the Most of Brain Control

Artificial intelligence improves a wheelchair system that could give paralyzed people greater mobility.

(September 13, 2010)  A robotic wheelchair combines brain control with artificial intelligence to make it easier for people to maneuver it using only their thoughts. The approach, known as “shared control,” could help paralyzed people gain new mobility by turning crude brain signals into more complicated commands.

The wheelchair, developed by researchers at the Federal Institute of Technology in Lausanne, features software that can take a simple command like “go left” and assess the immediate area to figure out how to follow the command without hitting anything. The software can also understand when the driver wants to navigate to a particular object, like a table.

Several technologies allow patients to control computers, prosthetics, and other devices using signals captured from nerves, muscles, or the brain. Electroencephalography (EEG) has emerged as a promising way for paralyzed patients to control computers or wheelchairs. A user needs to wear a skullcap and undergo training for a few hours a day over about five days. Patients control the chair simply by imagining they are moving a part of the body. Thinking of moving the left hand tells the chair to turn left, for example. Commands can also be triggered by specific mental tasks, such as arithmetic.
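The "shared control" idea above -- a crude EEG-decoded intent filtered through the chair's own obstacle sensing -- reduces to a small decision rule. The policy below is an illustrative sketch, not the Lausanne group's actual controller:

```python
def shared_control(intent: str, blocked: set) -> str:
    """Return the motion actually executed, given the user's decoded intent
    and the set of directions the chair's sensors report as blocked."""
    if intent not in blocked:
        return intent
    # Intent would cause a collision: fall back to the nearest clear direction.
    for alternative in ("forward", "left", "right"):
        if alternative not in blocked:
            return alternative
    return "stop"

# Example: the user imagines a left-hand movement ("go left"), but a table
# blocks the left side, so the chair keeps moving forward instead.
move = shared_control("left", blocked={"left"})
```

The division of labor is the key design choice: the noisy brain signal only has to express coarse intent, while collision-free execution is delegated to software that sees the immediate surroundings.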

read entire news >>

September 12, 2010

Stanford researchers' new high-sensitivity electronic skin can feel a fly's footsteps

(September 12, 2010)  Stanford researchers have developed an ultrasensitive, highly flexible electronic sensor that can feel a touch as light as an alighting fly.  Manufactured in large sheets, the sensors could be used in artificial electronic skin for prosthetic limbs, robots, touch-screen displays, automobile safety and a range of medical applications.

The light, tickling tread of a pesky fly landing on your face may strike most of us as one of the most aggravating of life's small annoyances.  But for scientists working to develop pressure sensors for artificial skin for use on prosthetic limbs or robots, skin sensitive enough to feel the tickle of fly feet would be a huge advance.  Now Stanford researchers have built such a sensor.

By sandwiching a precisely molded, highly elastic rubber layer between two parallel electrodes, the team created an electronic sensor that can detect the slightest touch.
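The sandwich described above works as a pressure-dependent parallel-plate capacitor: pressing thins the elastic rubber dielectric, and capacitance C = ε₀·εᵣ·A/d rises, signaling the touch. A back-of-envelope sketch, with geometry and permittivity values that are illustrative assumptions rather than the Stanford device's parameters:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(area_m2: float, gap_m: float, eps_r: float = 3.0) -> float:
    """Parallel-plate capacitance with an elastic dielectric of thickness gap_m."""
    return EPS0 * eps_r * area_m2 / gap_m

area = 1e-6                                  # a hypothetical 1 mm^2 sensing pixel
rest = capacitance(area, gap_m=1e-6)         # rubber layer at rest
pressed = capacitance(area, gap_m=0.9e-6)    # a light touch compresses it 10%

relative_change = pressed / rest - 1.0       # ~ +11% capacitance
```

Since C scales as 1/d, even a tiny compression produces a measurable fractional change, which is why a precisely molded, highly elastic layer can register something as light as a fly's footstep.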

read entire press release

journal reference 

September 7, 2010


This photo shows two kinds of electrodes sitting atop a severely epileptic patient's brain after part of his skull was removed temporarily.


(September 7, 2010)  In an early step toward letting severely paralyzed people speak with their thoughts, University of Utah researchers translated brain signals into words using two grids of 16 microelectrodes implanted beneath the skull but atop the brain.

"We have been able to decode spoken words using only signals from the brain with a device that has promise for long-term use in paralyzed patients who cannot now speak," says Bradley Greger, an assistant professor of bioengineering.

Because the method needs much more improvement and involves placing electrodes on the brain, he expects it will be a few years before clinical trials begin on paralyzed people who cannot speak due to so-called "locked-in syndrome."

The Journal of Neural Engineering's September issue is publishing Greger's study showing the feasibility of translating brain signals into computer-spoken words.

The University of Utah research team placed grids of tiny microelectrodes over speech centers in the brain of a volunteer with severe epileptic seizures. The man already had a craniotomy - temporary partial skull removal - so doctors could place larger, conventional electrodes to locate the source of his seizures and surgically stop them.
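One simple way to picture "translating brain signals into words" from two 16-electrode grids is template matching: each attempted word leaves a characteristic activity pattern across the 32 channels, and a new recording is assigned to the nearest known pattern. The nearest-centroid rule and the toy templates below are assumptions for illustration, not Greger's actual decoding method:

```python
import math

N_ELECTRODES = 32  # two grids of 16 microelectrodes

def distance(a, b):
    """Euclidean distance between two activity patterns."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def decode(signal, templates):
    """Return the word whose stored template is nearest the recorded signal."""
    return min(templates, key=lambda word: distance(signal, templates[word]))

# Two toy templates: 'yes' drives one grid, 'no' drives the other.
half = N_ELECTRODES // 2
templates = {
    "yes": [1.0] * half + [0.0] * half,
    "no":  [0.0] * half + [1.0] * half,
}

noisy = [0.9] * half + [0.1] * half  # a noisy recording of an attempted "yes"
word = decode(noisy, templates)
```

A real decoder must separate many words from far messier signals, which is part of why the method "needs much more improvement" before clinical use.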

read entire press release

journal reference