Nov 21 2015

MIT Tech Review – Will Knight – November 2015




Grabbing a pen or pair of sunglasses might be effortless for you or me, but it’s fiendishly difficult for a robot, especially if the object in question is unfamiliar or positioned awkwardly.

Practice makes perfect, though, as one robot is proving. It is teaching itself to grasp all sorts of objects through hours of repetition. The robot uses different cameras and infrared sensors to look at an unfamiliar object from various angles before attempting to pick it up. Then it does so using several different grasps, shaking the object to make sure it is held securely. It may take dozens of tries for the robot to find the right grasp, and dozens more for it to make sure an object won’t slip.

That might seem like a tedious process, but once the robot has learned how to pick something up, it can share that knowledge with other robots that have the same sensors and grippers. The researchers behind the effort eventually hope to have hundreds of robots learn collectively how to grasp a million different things.

The work was done by Stefanie Tellex, an assistant professor at Brown University, together with one of her graduate students, John Oberlin. They used a two-armed industrial robot called Baxter, made by the Boston-based company Rethink Robotics.

At the Northeast Robotics Colloquium, an event held at Worcester Polytechnic Institute this month, Oberlin demonstrated the robot’s gripping abilities to members of the public.

Enabling robots to manipulate objects more easily is one of the big challenges in robotics today, and it could have major industrial significance (see “Shelf-Picking Robots Will Vie for Amazon Prize”).

Tellex says robotics researchers are increasingly looking for more efficient ways of training robots to perform tasks such as manipulation. “We have powerful algorithms now—such as deep learning—that can learn from large data sets, but these algorithms require data,” she says. “Robot practice is a way to acquire the data that a robot needs for learning to robustly manipulate objects.”

Tellex also notes that there are around 300 Baxter robots in various research labs around the world today. If each of those robots were to use both arms to examine new objects, she says, it would be possible for them to learn to grasp a million objects in 11 days. “By having robots share what they’ve learned, it’s possible to increase the speed of data collection by orders of magnitude,” she says.
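Tellex’s figures imply a modest practice budget per object. A quick sanity check of the arithmetic, assuming her numbers of 300 robots, two arms each, a million objects, and 11 days:

```python
# Back-of-envelope check of the scaling claim (figures from the article:
# 300 Baxters, 2 arms each, 1,000,000 objects, 11 days of practice).
robots = 300
arms_per_robot = 2
target_objects = 1_000_000
days = 11

objects_per_arm = target_objects / (robots * arms_per_robot)
minutes_per_object = days * 24 * 60 / objects_per_arm

print(f"~{objects_per_arm:.0f} objects per arm, "
      f"~{minutes_per_object:.1f} min of practice each")
# ~1667 objects per arm, ~9.5 min of practice each
```

About nine and a half minutes of round-the-clock practice per object per arm, which is roughly consistent with the dozens of tries per object the article describes.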

To grasp each object, the Brown researchers’ robot scans it from various angles using one of the cameras in its arms and the infrared sensors on its body. This allows it to identify possible locations at which to grasp. The researchers used a mathematical technique to optimize the process of practicing different grips. With this technique, the team’s Baxter robot picked up objects as much as 75 percent more reliably than it did using its regular software. The information acquired for each object—the images, the 3-D scans, and the correct grip—is encoded in a format that allows it to be shared online.
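The article says the images, 3-D scans, and winning grip are packaged in a shareable format. A minimal sketch of what such a record might contain; all field names here are hypothetical, not the Brown team’s actual schema:

```python
import json

# Hypothetical grasp record -- field names are illustrative, not the
# Brown team's actual format. It bundles the data the article lists:
# images, 3-D scans, and the grasp that survived the shake test.
grasp_record = {
    "object_id": "sunglasses_01",
    "robot": "baxter",                      # same sensors and grippers assumed
    "images": ["front.png", "left.png", "top.png"],
    "scans": ["front.pcd", "left.pcd", "top.pcd"],
    "best_grasp": {
        "position_m": [0.42, -0.10, 0.05],            # gripper position (meters)
        "orientation_quat": [0.0, 0.707, 0.0, 0.707], # gripper orientation
        "shake_test_success_rate": 0.96,              # fraction of holds that survived shaking
    },
}

# Serializing to JSON is one plausible way such a record could be shared online.
payload = json.dumps(grasp_record)
print(len(payload) > 0)  # True
```

Any robot with matching hardware could then download the record and skip the hours of trial-and-error for that object.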

Other groups are developing methods to allow robots to learn to perform various tasks, including grasping. One of the most promising ways to achieve this is deep learning using so-called neural networks, which are simulations loosely modeled on the way nerves in the brain process information and learn (see “Robot Toddler Learns to Stand by ‘Imagining’ How to Do It”).

Although humans acquire the ability to grasp through learning, a child doesn’t need to spend nearly as much time handling different objects, and he or she can use previous experience to figure out very quickly how to pick up a new object. Tellex says the ultimate goal of her project is to give robots similar abilities. “Our long-term aim is to use this data to generalize to novel objects,” she says.

Aug 07 2015

ZME Science – August 7th, 2015





The human brain is arguably the most complex structure in the Universe. To unlock its secrets, scientists all over the world are mapping and simulating parts of the human brain. The latest breakthrough comes from Japan, where scientists using the K supercomputer, the fourth most powerful in the world, accurately simulated one second’s worth of brain activity. It took the computer 40 minutes to complete the task, and even then the simulation covered just one percent of the brain’s network!

A lot of people liken the brain to a computer. In many respects they’re right, but by no means can this comparison be made using conventional computers. The human brain has a huge number of synapses. Each of its roughly 10^11 (one hundred billion) neurons has on average 7,000 synaptic connections to other neurons. These connections operate at high speed and, most importantly, in parallel. The brain is also remarkably efficient: the adult human brain needs only about 20 watts of power. A conventional machine that could simulate the entire human brain would need to have an entire river’s course bent just to cool it!
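The numbers behind that comparison are worth multiplying out. Using the figures above (one hundred billion neurons, ~7,000 synapses each):

```python
# Figures from the text: ~10^11 neurons, ~7,000 synapses per neuron.
neurons = 10**11
synapses_per_neuron = 7_000

total_synapses = neurons * synapses_per_neuron
watts = 20  # approximate power draw of the adult human brain

print(f"~{total_synapses:.0e} synaptic connections on ~{watts} W")
# ~7e+14 synaptic connections on ~20 W
```

Seven hundred trillion connections running on less power than a light bulb is the scale any simulation has to contend with.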

The K computer, with over 700,000 processor cores and 1.4 million GB of RAM, was used as part of a joint venture between the Japanese research group RIKEN and the German research group Forschungszentrum Jülich. So far, their research is the most advanced of its kind. Also racing to map the brain are groups from the BRAIN Initiative (USA), the Human Brain Project (EU), and Brainnetome (China).

The Japanese group’s simulation was for 1% of human brain activity. To map the entire brain, an exascale machine would be needed, one capable of performing a quintillion floating point operations per second. Intel says that by 2018 it will release such a supercomputer.
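A rough extrapolation shows why exascale is the threshold researchers care about. This sketch assumes the K computer runs at roughly 10 petaflops, an exascale machine at 1 exaflop (1,000 petaflops), and that runtime scales linearly with both network size and machine speed; all three are simplifying assumptions, not figures from the article:

```python
# Back-of-envelope extrapolation from petascale to exascale.
minutes_on_k_for_1pct = 40       # from the article: 1 s of 1% of the brain
full_brain_factor = 100          # simulating 100% of the network vs. 1%
speedup = 1000 / 10              # 1 exaflop vs. a ~10-petaflop K: 100x

minutes_on_exascale = minutes_on_k_for_1pct * full_brain_factor / speedup
print(minutes_on_exascale)  # 40.0 -- still ~40 min per simulated second
```

Under these assumptions an exascale machine handles the whole brain in about the time K needed for one percent of it, which is the gain Diesmann alludes to below.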

“If petascale computers like the K computer are capable of representing one per cent of the network of a human brain today, then we know that simulating the whole brain at the level of the individual nerve cell and its synapses will be possible with exascale computers – hopefully available within the next decade,” Markus Diesmann, one of the scientists involved, told the Daily Telegraph.

Aug 07 2015

Defense Systems – August 7th, 2015





The Army is currently developing a two-way handheld translation device that can be used by soldiers in the field when deployed in foreign countries, starting with deployments in Africa.

The SQ.410 Translation System, made by VoxTec, is currently programmed with nine languages and does not require a cell network or an Internet connection, the Army said in a release. The device, which is designed to be worn on the chest for hands-free operation, will repeat a soldier’s spoken words in English and display them on the screen, so the soldier can be sure the voice recognition is accurate. The system then provides written and spoken translations in the other language, the Army said. It also can be used to record conversations.
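The confirm-then-translate flow described above can be sketched as a toy pipeline. The phrasebook entries and function names below are illustrative stand-ins, not VoxTec’s actual software or data:

```python
# Tiny offline English-to-French phrasebook standing in for the device's
# nine languages; entries are illustrative, not VoxTec's data.
PHRASEBOOK_FR = {
    "where is the well": "où est le puits",
    "we are here to help": "nous sommes ici pour aider",
}

def translate_exchange(heard_english):
    # Step 1: echo the recognized English so the soldier can confirm
    # the voice recognition got it right.
    confirmation = f"You said: {heard_english}"
    # Step 2: produce the translation offline -- no network required.
    translation = PHRASEBOOK_FR.get(heard_english.lower(), "<no match>")
    return confirmation, translation

conf_line, translated = translate_exchange("Where is the well")
print(conf_line)    # You said: Where is the well
print(translated)   # où est le puits
```

The confirmation step matters because a mis-recognized phrase translated confidently is worse than no translation at all.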

The SQ.410 was tested in mid-July in Italy at the U.S. Army Garrison Vicenza and received positive marks from those involved, the Army said.

Army officials are investing heavily in Africa, a continent with rich natural resources, several U.S. partners and a variety of languages. Given the continent’s importance, working effectively with African partners is critical for the future of interoperability, especially as threats from insurgent groups in Somalia, Nigeria, the Sahel and the Maghreb persist. The Army conducts training in about 20 of the continent’s 54 countries.

“We believe Africa is a future frontier for technology in the next 10 to 15 years,” said Maj. Eddie Strimel, the Field Assistance in Science and Technology, or FAST, advisor assigned to U.S. Army Africa, noting that communicating in French dialects—which are common and varied in Africa—is becoming critical. “French is a priority for us. If we can get these dialects developed with this type of system, it will benefit the Army, Air Force and Marines down the road.”

Researchers are working on improvements, particularly with regard to the various dialects. Speech translation software for French generally deals with standard European speech. “How well it works for communication tasks specific to U.S. teams working with African partners is just now being examined,” said Dr. Stephen LaRocca, a computer scientist and team chief of the Multilingual Computing Branch at the Army Research Laboratory, who provided technical expertise during testing.


“From a scientific perspective, we need to know how sensitive the technology is to the different accents of the many diverse French-speaking African language communities,” LaRocca said.

Additionally, the system is not yet capable of translating military jargon, though for now that is a minor issue, since the system provides critical communications capabilities with partners in the absence of human translators.

The Army’s Africa component is working with the Army Rapid Equipping Force to purchase five translators to continue testing and data collection to refine the software. And the Army doesn’t expect to stop with Africa, as commanders around the world have expressed interest in the translators.

Aug 07 2015

Defense Systems – August 7th, 2015






Predicting the future isn’t easy, as we’ve seen from how different the actual 2001 was from 1968’s “2001: A Space Odyssey” (Keir Dullea interacted with HAL 9000; we interacted with Clippy, the Microsoft paper clip) and from how 2015 differs from its depiction in 1989’s “Back to the Future Part II,” although that film did get more than a few things right.

But for the military, projecting what the battlefield of the future will look like is a key part of its planning, and a spring workshop sponsored by the Army Research Lab, with participants from the military, media, academia and industry, looked ahead to what it could be in 2050. Some of the things they foresee in their recently released report: robots, humans with superhero-like powers, extreme precision targeting and even an actual force field.

Rise of the robots

One consensus from the report is that robot-human interaction will continue to increase. Even in technologies being developed today, such as a “sentient data” solution that will allow soldiers to transmit data to other soldiers and electronic systems without conscious thought through an implant, this idea of a “man-machine partnership” is prevalent.

“An associated major aspect of this battlefield development will be the seamless integration of human and machine decision making,” the report states. “As a result, battle rhythm will increase to the point that, in many instances, humans will no longer be able to be ‘in the loop,’ but will instead need to operate ‘on the loop’.” The difference between the two is that human decisions are required for certain tasks, allowing for humans to exercise positive control when “in the loop.” Conversely, humans “on the loop” can merely watch what’s going on and can act either in anticipation of automated decisions or after such decisions have been made.
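The in-the-loop/on-the-loop distinction comes down to what happens by default when the human can’t keep pace. A small illustrative sketch (not from the report; names and actions are hypothetical):

```python
# "In the loop": no action executes without explicit human approval.
# "On the loop": every action executes unless the human vetoes it in time.
# Passing human=None models an operator who cannot keep up with the
# battle rhythm the report describes.

def in_the_loop(actions, human):
    """Execute only the actions the human explicitly approves."""
    return [a for a in actions if human is not None and human(a)]

def on_the_loop(actions, human):
    """Execute every action the human does not veto."""
    return [a for a in actions if human is None or not human(a)]

actions = ["track target", "jam radar", "engage target"]

# When the human can't keep pace, the two defaults diverge sharply:
print(in_the_loop(actions, None))   # [] -- nothing happens
print(on_the_loop(actions, None))   # all three actions happen

# An attentive human withholding approval for (or vetoing) engagement:
careful = lambda a: a != "engage target"   # approves all but engagement
veto    = lambda a: a == "engage target"   # vetoes engagement after the fact
print(in_the_loop(actions, careful))  # ['track target', 'jam radar']
print(on_the_loop(actions, veto))     # ['track target', 'jam radar']
```

With an attentive human the outcomes match; the report’s point is that as battle rhythm increases, the human increasingly won’t be attentive in time, and the on-the-loop default is to act.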


While the report notes that robots will perform more battlefield tasks in order to keep humans out of harm’s way – just as unmanned aircraft currently do – the humans who are on the battlefield will have augmented abilities, to the point of seeming superhuman. Humans will be “physically and mentally augmented with enhanced capabilities that improve their ability to sense their environment, make sense of their environment and interact with one another,” according to the report.



Fine-tuned targeting

Another prediction is micro-targeting, which will take precision strikes to new levels, the report states. “For example, instead of being able to identify and engage a particular building or moving vehicle while minimizing collateral damage, the concept of micro-targeting involves the identification and surgical engagement of specific individuals employing either kinetic or non-kinetic means.”

According to workshop participants, this trend is likely given the ability to fuse data from a variety of sources. One example could be the Air Force Research Lab’s CATALiST program, software that uses a semantic wiki intelligence solution for more accurate targeting by providing commanders with a variety of options.

Unlike a typical wiki, such as Wikipedia, a semantic wiki understands the data collected in a particular set and determines how pages are constructed on the fly, Mike Gilger, director of Intelligence & Software Solutions for CATALiST contractor Modus Operandi, told Defense Systems. “So then instead of your data aging over time, and becoming worthless, you’re always looking at the greatest, latest data that exists within your knowledge store that is continuously updated, either via the wiki or through other automated systems,” he said.

Programs like CATALiST provide commanders with full-spectrum targeting, offering “the least expensive option to perform the greatest value,” Gilger said. Commanders can “determine the best way to apply all the resources we have to bear, whether it’s a cyber attack, whether it’s an EMP pulse to just go ahead and temporarily take down some electrical systems, potentially even cutting the electrical line that will go and take out these radars.” Having those options could limit the damage to surrounding civilian populations during an attack, or during hostage rescue missions, for example.



Along with working more closely with humans, robots are likely to team up with each other, the report notes, albeit without going into much detail. Military researchers have already begun developing and requesting such solutions, including the Defense Advanced Research Projects Agency’s Collaborative Operations in Denied Environment (CODE) program, which seeks to improve collaborative autonomy among unmanned systems. DARPA wants to enable UAVs “to find, track, identify and engage targets, all under the command of a single human mission supervisor,” DARPA program manager Jean-Charles Ledé stated. The Navy also has developed “swarming” technology for autonomous and semiautonomous boats and small UAVs.

Lasers and force fields

The military already is pursuing the development of lasers and other directed energy weapons. But another “Star Trek” idea also could take hold, as the workshop participants said that force fields will become a critical aspect of defense, particularly as a defense against directed energy weapons. Directed energy weapons, workshop participants noted, include “high power micro and millimeter wave, and lasers of various kinds (solid-state, chemical, fiber), both airborne and ground.” These technologies have suffered significant setbacks since their inception in the 1960s, but they have begun to demonstrate promise.

Boeing, in fact, recently patented a force field concept that uses electromagnetic energy from lasers, microwave generators or electric arcs to repel energy by heating air to intercept shockwaves and attenuate energy density before it reaches its intended target.

Jul 27 2015

The Guardian – July 27th, 2015





An open letter warning against autonomous weapons, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, was signed by Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and professor Stephen Hawking, along with 1,000 AI and robotics researchers. The letter states: “AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

The authors argue that AI can be used to make the battlefield a safer place for military personnel, but that offensive weapons that operate on their own would lower the threshold of going to battle and result in greater loss of human life.  Should one military power start developing systems capable of selecting targets and operating autonomously without direct human control, it would start an arms race similar to the one for the atom bomb, the authors argue. Unlike nuclear weapons, however, AI requires no specific hard-to-create materials and will be difficult to monitor.

“The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” the authors said. Toby Walsh, professor of AI at the University of New South Wales, said: “We need to make a decision today that will shape our future and determine whether we follow a path of good. We support the call by a number of different humanitarian organisations for a UN ban on offensive autonomous weapons, similar to the recent ban on blinding lasers.”

Musk and Hawking have warned that AI is “our biggest existential threat” and that the development of full AI could “spell the end of the human race”. But others, including Wozniak, have recently changed their minds on AI, with the Apple co-founder saying that robots would be good for humans, making them like the “family pet and taken care of all the time”.

At a UN conference in Geneva in April discussing the future of weaponry, including so-called “killer robots”, the UK opposed a ban on the development of autonomous weapons, despite calls from various pressure groups, including the Campaign to Stop Killer Robots.