Robotics and Artificial Intelligence are often confused, yet a thin line separates the two. Traditional robots are pre-programmed machines or humanoids built to perform specific tasks regardless of the environment they are placed in, so they exhibit no intelligent behaviour. With a sprinkle of Artificial Intelligence, these robots become artificially intelligent robots: machines controlled by AI programs that can make decisions when they encounter real-world situations.
How has AI helped Robotics?
Artificial Intelligence can be loosely classified as general or narrow based on the level of task specificity. General AI is the kind seen in movies such as The Terminator or The Matrix: it would give machines broad knowledge and capabilities almost on par with humans. However, general AI is still far in the future and does not exist yet. Current robots are designed to assist humans with day-to-day tasks in specific domains. For instance, the Roomba vacuum cleaner is largely automated and needs very little human intervention. It can make decisions when confronted with choices, such as when the way ahead is blocked by a couch: the cleaner might decide to turn left because it has already vacuumed the carpet to the right.
Let’s look at some basic capabilities that Artificial Intelligence has brought to robotics, using the example of a self-driving car:
- Adding the power of perception and reasoning: Sensors such as sonar, infrared, and Kinect sensors give robots strong perception skills, which let them adapt to new situations. With the help of these sensors, our self-driving car takes input data from the environment (identifying roadblocks, signals, pedestrians, and other cars), labels it, transforms it into knowledge, and interprets it. It then modifies its behaviour based on the result of this perception and takes the necessary actions.
- Learning process: With new experiences such as heavy traffic or a detour, the self-driving car has to perceive and reason in order to reach conclusions. Here, AI builds a learning process: when similar experiences are repeated, the car stores the resulting knowledge and speeds up its intelligent responses.
- Making correct decisions: With AI, the driverless car gains the ability to prioritize actions, such as taking another route in case of an accident or detour, or braking suddenly when a pedestrian or object appears, so that the decisions it makes are safe and effective.
- Effective human interaction: This is the most prominent capability enabled by Natural Language Processing (NLP). A driverless car accepts and understands passenger commands through in-car voice commands based on NLP. The AI in the car thus understands the meaning of natural human language and readily responds to the query thrown at it. For instance, given a destination address by the passenger, the AI will drive along the fastest route to get there. NLP also helps in understanding human emotions and sentiments.
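The perceive-decide-act cycle described above can be sketched in a few lines of Python. This is a deliberately minimal illustration, not any real vehicle's control stack: the sensor fields, action names, and distance threshold are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    obstacle_distance_m: float  # e.g. a sonar/lidar range reading (hypothetical)
    pedestrian_detected: bool   # e.g. output of a camera classifier (hypothetical)

def decide(percept: Percept) -> str:
    """Prioritize actions: safety-critical responses always come first."""
    if percept.pedestrian_detected:
        return "emergency_brake"        # highest priority: a person is in the way
    if percept.obstacle_distance_m < 5.0:
        return "reroute"                # obstacle ahead: pick another path
    return "continue"                   # nothing unusual: keep driving

# Usage: feed perception results into the decision step.
print(decide(Percept(obstacle_distance_m=3.0, pedestrian_detected=False)))  # reroute
print(decide(Percept(obstacle_distance_m=50.0, pedestrian_detected=True)))  # emergency_brake
```

The key design point is the ordering of the rules: perception only becomes safe behaviour when the decision logic ranks actions by consequence, which is exactly the "prioritize actions" capability described above.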
Real-world Applications of AI in Robotics
Sophia the humanoid is by far the best-known real-world amalgamation of robotics and Artificial Intelligence. However, there are other real-world use cases of AI in robotics with practical applications, including:
- Self-supervised learning: This allows robots to create their own training examples in order to improve performance. For instance, when a robot has to interpret long-range, ambiguous sensor data, it uses a priori training together with data it captured at close range. This knowledge is later incorporated into the robots and into optical devices that can detect and reject objects (dust and snow, for example). The robot is then capable of detecting obstacles and objects in rough terrain, and the approach is also used in 3D scene analysis and in modeling vehicle dynamics. One example of a self-supervised learning algorithm is a road detection algorithm in which the car's front-view monocular camera uses a road probabilistic distribution model (RPDM) and fuzzy support vector machines (FSVMs); it was designed at MIT for autonomous vehicles and other mobile on-road robots.
- Medical field: In the medical sphere, a collaboration through Cal-MR, the Center for Automation and Learning for Medical Robotics, between researchers at multiple universities and a network of physicians created the Smart Tissue Autonomous Robot (STAR). Using innovations in autonomous learning and 3D sensing, STAR was able to stitch together pig intestines (used in place of human tissue) with better precision and reliability than the best human surgeons. STAR is not a replacement for surgeons, but in future it could remain on standby to handle emergencies and assist surgeons in complex procedures, offering major benefits in similar types of delicate surgery.
- Assistive robots: These are robots that sense, process sensory information, and perform actions that benefit not only the general public but also people with disabilities and senior citizens. For instance, Bosch's driver-assistance systems are equipped with radar sensors and video cameras, allowing them to detect vulnerable road users even in complex traffic situations. Another example is the MICO robotic arm, which uses a Kinect sensor.
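The core trick behind self-supervised learning, as described above, can be illustrated with a toy sketch: close-range readings, where labels are trustworthy, become training examples for classifying ambiguous long-range readings. The nearest-centroid classifier, the feature vectors, and the labels below are all illustrative stand-ins, not the MIT road-detection pipeline.

```python
def centroid(samples):
    """Average a list of equal-length feature vectors component-wise."""
    n = len(samples)
    return [sum(vec[i] for vec in samples) / n for i in range(len(samples[0]))]

def train(close_range):
    """close_range: {label: [feature_vector, ...]} self-labelled at short range."""
    return {label: centroid(vecs) for label, vecs in close_range.items()}

def classify(model, vec):
    """Assign an ambiguous long-range reading to the nearest class centroid."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: sq_dist(model[label], vec))

# Close-range patches self-label as "road" or "not_road"
# (features might be brightness and texture; values here are invented).
model = train({
    "road":     [[0.2, 0.1], [0.3, 0.15]],
    "not_road": [[0.8, 0.9], [0.7, 0.85]],
})
print(classify(model, [0.25, 0.2]))  # an ambiguous long-range patch -> road
```

No human ever labels the long-range data: the robot's own short-range sensing supplies the training set, which is what makes the scheme "self-supervised".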
Challenges in adopting AI in Robotics
Having an AI robot means less pre-programming, the replacement of manpower, and so on. There is always a fear that robots may outperform humans in decision making and other intellectual tasks. However, one has to take risks to explore what this partnership could lead to. Clearly, building an AI environment into robotics is no cakewalk, and there are challenges that experts will face. Some of them include:
- Legal aspects: After all, robots are machines. What if something goes wrong? Who would be liable? One way to mitigate bad outcomes is to develop extensive testing protocols for the design of AI algorithms, improved cybersecurity protections, and input validation standards. This would require not only AI experts with a deep understanding of the technologies, but also experts from other disciplines such as law, the social sciences, and economics.
- Getting used to an automated environment: While traditional robots had to be fully pre-programmed, with AI this changes to a certain extent: experts only feed in the initial algorithms, and the robot adapts to further changes through self-learning. AI is feared for its capacity to take over jobs and automate many processes. Hence, broad acceptance of the new technology is required, along with a careful, managed transition for affected workers.
- Quick learning with few samples: The AI systems within robots should let them learn quickly even when the supply of data is limited, unlike deep learning, which requires huge amounts of data to produce an output.
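To see why learning from few samples is even plausible, consider a one-nearest-neighbour classifier: it needs only a single labelled example per class, in contrast with deep networks trained on massive datasets. The "sensor signature" vectors and class names below are invented purely for illustration.

```python
def one_nn(examples, query):
    """examples: [(feature_vector, label)]; return the label of the closest example."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(examples, key=lambda ex: sq_dist(ex[0], query))[1]

# One labelled example each of an "open" and a "blocked" path signature.
support = [([0.1, 0.9], "open"), ([0.9, 0.2], "blocked")]
print(one_nn(support, [0.2, 0.8]))  # -> open
```

Of course, such a classifier is only as good as its two examples; the research challenge is getting deep-learning-level robustness at this sample efficiency, which is what one-shot and meta-learning techniques aim for.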
The AI-robotics fortune
The future of this partnership is bright, as robots become more self-reliant and may well assist humans in their decision making. However, much of this still sounds like fiction for now. At present we mostly have semi-supervised learning, which requires a human touch for the essential functioning of AI systems. Unsupervised learning, one-shot learning, and meta-learning techniques are also creeping in, promising machines that would no longer require human intervention or guidance. Robotics companies such as Silicon Valley Robotics and Mayfield Robotics, together with auto manufacturers such as Toyota and BMW, are on a path to create autonomous vehicles, which implies that AI is becoming a priority investment for many.