Artificial intelligence has opened up a new generation of robotics: Robotics 2.0. The biggest change is the shift from automation through manual programming toward truly autonomous learning. This article tries to demystify artificial intelligence (AI) applications in robotics, help readers understand how AI robots will affect our future, and clarify a topic we often hear about but rarely fully understand.
This article is the first in the "Robotics 2.0" series, which describes the impact of robotics and AI on various industries and on the future of work. We will discuss how AI will unlock the potential of robotics, the challenges and opportunities of this new technology, and how all of this will affect our productivity, employment, and even daily life. At a moment when artificial intelligence is heavily hyped, iJUNCO hopes these articles will encourage a more constructive and comprehensive discussion.
Redefining Robots: Demystifying the Next Generation of AI Robots (Robotics 2.0)
When it comes to robots, we all have our own images in mind: Pepper, the social robot from SoftBank Group; Atlas, the Boston Dynamics robot that can easily do backflips; the killer robots of the "Terminator" film series; and the lifelike android characters of the TV series "Westworld".
We often hear polarized views: some people overestimate the ability of robots to imitate humans, believing machines will eventually replace us, while others are too pessimistic about the potential of new research and technology.
Over the past year, many friends in the startup and technology worlds have asked me about the "practical" developments in AI, especially in deep reinforcement learning and robotics.
What they are most curious about is:
What is the difference between an AI robot and a traditional robot? Do AI robots really have the potential to disrupt major industries? What are their capabilities and limitations?
It turns out to be unexpectedly difficult to understand the current state of the technology and the structure of the industry, let alone to predict the future. With this article, I try to demystify artificial intelligence as applied to robots, and to clarify a topic that we often hear about but seldom truly understand.
The basic questions that must be answered first: What is AI-enabled robotics? What makes it unique?
Robot evolution: from automation to autonomy
"Machine learning solves problems that are 'difficult for computers but easy for humans', or, put another way, problems that are 'difficult for humans to describe to computers'."
—Benedict Evans, Andreessen Horowitz (a16z)
The biggest achievement AI has brought to robotics is the move from "automation" (engineers write rules in code and the robot follows them) toward true "autonomous learning."
If a robot only needs to handle one task, it makes little visible difference whether it has artificial intelligence or not; but if it needs to handle a variety of tasks, or respond to changes in humans and the environment, it needs a certain degree of autonomy to be competent.
Let's borrow the following levels defined for self-driving cars to explain the evolution of robots:
Level 0 — No automation: humans operate the machine, with no robot involved. (The general definition of a robot is a programmable machine capable of performing complex actions on its own.)
Level 1 — Single automated operation: a single function is automated, without using environmental information. This is the current state of robotics in automation and manufacturing: through programming, a robot can repeat a specific task with high accuracy and speed, but to this day most practical robots cannot sense or respond to changes in their environment.
Level 2 — Partial automation: the machine makes decisions with the help of specific environmental inputs. For example, some robots recognize and respond to different objects through visual sensors. However, traditional computer vision requires every object to be registered in advance with explicit instructions, so the robot still lacks the ability to handle changes, unexpected conditions, or new objects.
Level 3 — Conditional autonomy: the machine monitors the environment and controls all operations, but still requires human attention and (immediate) intervention.
Level 4 — High autonomy: full autonomy in certain situations or within a defined area.
Level 5 — Full autonomy: full autonomy in any situation, without human intervention.
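The six levels above can be captured as a simple data structure. This is a minimal sketch for illustration only; the enum names and the `needs_human` helper are my own, not part of any standard API.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """The robot autonomy levels described above, borrowed from self-driving cars."""
    NO_AUTOMATION = 0
    SINGLE_AUTOMATED_OPERATION = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTONOMY = 3
    HIGH_AUTONOMY = 4
    FULL_AUTONOMY = 5

def needs_human(level: AutonomyLevel) -> bool:
    """Levels 0-3 still depend on human operation, programming, or supervision."""
    return level <= AutonomyLevel.CONDITIONAL_AUTONOMY

print(needs_human(AutonomyLevel.CONDITIONAL_AUTONOMY))  # True: humans must stay ready to intervene
print(needs_human(AutonomyLevel.HIGH_AUTONOMY))         # False: autonomous within a defined area
```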
Which level of autonomy are we in now?
Most robots in factories are controlled in an open-loop, non-feedback fashion, meaning their actions and their sensor feedback are independent of each other (level 1).
A small number of factory robots adjust their operation according to sensor feedback (level 2). There are also collaborative robots (cobots), which are simpler to operate and safe enough to work alongside humans, although their accuracy and speed are dwarfed by those of industrial robots. And while cobots are relatively easy to program, they still do not learn autonomously: whenever the task or environment changes, a human must manually guide the cobot through the adjustment or rewrite its program; the machine itself cannot learn or adapt.
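The open-loop versus feedback distinction above can be sketched in a few lines. This is a toy illustration, not real robot-control code: the one-dimensional "arm position," the proportional gain, and the function names are all invented for the example.

```python
def open_loop_move(commands):
    """Level 1: replay a pre-programmed command sequence, ignoring all feedback."""
    executed = []
    for c in commands:
        executed.append(c)  # commands are issued blindly, errors accumulate
    return executed

def closed_loop_move(target, start, gain=0.5, steps=20):
    """Level 2: adjust each step using feedback on the remaining error."""
    position = start
    path = [position]
    for _ in range(steps):
        error = target - position   # sensor feedback enters the loop here
        position += gain * error    # simple proportional correction
        path.append(position)
    return path

path = closed_loop_move(target=10.0, start=0.0)
print(round(path[-1], 4))  # the position converges toward the 10.0 target
```

The point of the sketch: the open-loop version can only repeat what it was told, while the feedback version corrects itself at every step, which is what lets a level 2 robot respond to (simple) changes in its environment.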
Deep learning and reinforcement learning can help robots handle various objects autonomously, minimizing human intervention.
We have started to see pilot projects using AI robots (level 3/4); "warehouse picking" is a good example. In a fulfillment warehouse, employees must place millions of different products into boxes according to customer orders. Traditional computer vision cannot handle such a wide range of items, because each item must be registered in advance and the robot must be pre-programmed with the actions it needs to take.
Thanks to deep learning and reinforcement learning, however, robots can begin to learn to handle various objects autonomously, with less human intervention. During learning, the robot may encounter goods it has never seen before and need human assistance or demonstration (level 3); but as it collects more data and learns from trial and error, its algorithms improve and it moves toward full autonomy (level 4).
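The trial-and-error learning described above can be illustrated with a deliberately tiny example: an epsilon-greedy bandit choosing among grasp angles. Real picking systems use deep networks over camera images; here the candidate angles, their hidden success probabilities, and all names are invented purely to show the learn-from-mistakes loop.

```python
import random

random.seed(0)
TRUE_SUCCESS = {0: 0.2, 45: 0.8, 90: 0.4}    # hidden from the learner

counts = {a: 0 for a in TRUE_SUCCESS}        # attempts per grasp angle
values = {a: 0.0 for a in TRUE_SUCCESS}      # estimated success rate per angle

def pick_angle(epsilon=0.1):
    """Mostly exploit the best current estimate, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(list(values))
    return max(values, key=values.get)

for _ in range(2000):                        # 2000 simulated grasp attempts
    a = pick_angle()
    success = random.random() < TRUE_SUCCESS[a]
    counts[a] += 1
    # incremental average: nudge the estimate toward the observed outcome
    values[a] += (success - values[a]) / counts[a]

best = max(values, key=values.get)
print(best)  # the learner converges on the 45-degree grasp
```

Early on the robot fails often (the level 3 phase, where a human might step in); as attempts accumulate, the estimates sharpen and intervention becomes unnecessary, which is the path toward level 4.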
Just as in the self-driving car industry, robotics startups have adopted different strategies: some are optimistic about human-robot collaboration and focus on level 3, while others believe machines will eventually achieve true full autonomy, so they skip level 3 and aim directly at level 4, or even level 5.
This is one of the reasons why it is difficult for us to assess the degree of industry autonomy now.
A startup may claim to be building a level 3/4 autonomous system while in practice relying heavily on outsourced, manual remote control of its machines. Without insight into the maturity of its internal software and AI, one cannot tell remote control from autonomous learning just by looking at the machine. On the other hand, startups targeting level 4/5 that fail to deliver the promised results quickly may dampen customers' willingness to adopt early on, making early-stage data collection even harder.
In the second half of this article, I will discuss startups' different business strategies in more detail.
The rise of AI robots: the scope of application is no longer limited to warehouse management
Interestingly, the potential of AI applications in robotics may be even greater than in self-driving cars. Because robots serve a wide variety of applications and industries, in a sense it should be easier for robots to reach level 4 than for cars.
The adoption of AI robot arms in warehouses is the best example: a warehouse is a "semi-controlled" environment with relatively low uncertainty, and although the picking operation is critical, it can tolerate errors.
Autonomous household or surgical robots, by contrast, will not arrive until a more distant future; their environments have far more variables, and some of their tasks are irreversible and carry a certain degree of danger. Still, it is foreseeable that as the precision, accuracy, and reliability of the technology advance, more industries will adopt AI robots.
Many industries have not yet adopted robotic arms, mainly because of the limitations of traditional robots and computer vision.
At present there are only about 3 million robot arms in the world, most of them performing tasks such as handling, welding, and assembly. So far, almost no industries outside automotive and electronics have begun to use robotic arms, mainly because of the aforementioned limitations of traditional robots and computer vision.
In the coming decades, as deep learning (DL), reinforcement learning (RL), and cloud technologies unlock the potential of robots, we will see explosive growth driven by this new generation of robots, reshaping the industrial landscape. What growth opportunities await AI robots? And what approaches and business models are startups and incumbents adopting in response to the changes brought by these new technologies?
Industry profile of new generation AI robot startups
Next, I will introduce a few sample companies in different market segments. A brief survey like this cannot cover every company; readers are welcome to suggest other companies and applications to make the picture more complete.
AI / Robotics startup market overview
Studying the industrial structure of this new generation of robots, we can see two very different business models: vertical applications and horizontal applications.
1. Vertical application
The first is vertical applications: most startups in Silicon Valley focus on developing solutions for specific vertical markets, such as e-commerce logistics, manufacturing, and agriculture.
Providing a complete solution makes sense while the underlying technology is still in its infancy: rather than relying on others for key modules or components, the company builds an end-to-end solution. Such vertical integration gets products to market faster and gives the company a fuller picture of end-user cases and performance.
However, applications as easy to implement as warehouse picking are not easy to find. Warehouse picking is a relatively simple task, customers' willingness to invest and the technical feasibility are both high, and almost every warehouse has the same picking needs.
But in other industries (such as manufacturing), assembly tasks may vary from factory to factory; moreover, manufacturing tasks demand higher accuracy and speed, which is technically much harder.
At present, robots with learning capabilities cannot match the accuracy of closed-loop programmed robots.
Although machine learning lets robots keep improving, robots that operate through machine learning currently cannot reach the same accuracy as closed-loop programmed robots, because they must accumulate trial-and-error experience, learn from mistakes, and improve gradually.
This explains why startups such as Mujin and CapSen Robotics have not used deep reinforcement learning, choosing traditional computer vision instead.
However, traditional computer vision requires every object to be registered in advance, and it lacks the ability to scale and adapt to change. Once deep reinforcement learning (DRL) reaches its performance threshold and gradually becomes the industry mainstream, this traditional approach will eventually become obsolete.
Another problem with these startups is that they are often overvalued. In Silicon Valley we often see startups raising tens of millions of dollars without being able to promise any truly concrete revenue stream.
For entrepreneurs, painting the grand future of deep reinforcement learning is the easy part; the reality is that achieving those results will take years. Although these companies are still far from profitability, Silicon Valley venture capital is willing to keep betting on these talented, technologically advanced teams.
2. Horizontal application
Horizontal applications, on the other hand, are a more practical but rarer model. We can break robotics down into three parts: sensing (input), processing, and actuation (output), plus development tools. ("Processing" here covers controllers, machine learning, operating systems, robot modules, and everything else that is neither sensing nor actuation.)
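The sense → process → act decomposition above can be sketched as three swappable interfaces. This is a minimal illustration under invented names: the fake sensor reading, the 0.5 distance threshold, and the command strings are all stand-ins, not any real robotics API.

```python
class Sensor:
    """Sensing (input): wraps cameras, force sensors, etc."""
    def read(self):
        return {"object_seen": True, "distance": 0.42}  # hard-coded fake reading

class Processor:
    """Processing: perception, planning, learned policies, controllers."""
    def decide(self, observation):
        if observation["object_seen"] and observation["distance"] < 0.5:
            return "grasp"
        return "move_closer"

class Actuator:
    """Actuation (output): wraps motors, grippers, etc."""
    def execute(self, command):
        return f"executing: {command}"

def control_step(sensor, processor, actuator):
    """One pass through the sense -> process -> act pipeline."""
    return actuator.execute(processor.decide(sensor.read()))

print(control_step(Sensor(), Processor(), Actuator()))  # executing: grasp
```

The horizontal-application thesis is visible in the structure: a vendor can sell a better `Processor` (or `Sensor`, or `Actuator`) to many robot makers, as long as the interfaces between the three parts are standardized.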
I think this area has the most growth potential. For robot users, a fragmented market is a thorny problem: every robot manufacturer has its own languages and interfaces, making it difficult for system integrators and end users to integrate robots with related systems.
As the industry matures and more robots are used outside automotive and electronics factories, we will need more standard operating systems, communication protocols, and interfaces to improve efficiency and shorten time to market.
For example, several startups in Boston are working on such modules: the safety module developed by Veo Robotics lets industrial robots work more safely alongside humans, and Realtime Robotics provides accelerated motion-planning solutions for robot arms.