Several days ago, I came across a traffic circle in the heart of the German Colony in Haifa which the locals call "the grinder." Traffic reaches the circle from four different directions, with two lanes in each direction. In one part of the circle is a traffic light with pedestrian crossings on the sides and the famous Metronit bus system passing nearby. There is no such thing as right of way in this urban combat zone, and on this particular day, there was a special delight in the form of garbage in one of the lanes, which forced drivers to zigzag as they passed other cars. To make a long story short, it was a rather typical day in Israeli transportation hell.
In order to pass through this roundabout, the drivers have to drive in a way that is not for the timid: you have to look in three directions simultaneously while making eye contact with the drivers crossing the circle in order to assess their determination/sanity, anticipate when the traffic lights will change, ignore the constant honking, and wade in with the knowledge that you may have to brake, accelerate, or climb onto the traffic island in the middle at any instant. Now imagine yourselves sitting in an autonomous car in which the computer is supposed to bring the car to the other side of the roundabout by maneuvering between non-autonomous vehicles.
The key to an autonomous vehicle: artificial intelligence
In attempting to translate intuitive "driving policy" acquired from human experience into the language of algorithms and programming, in which the autonomous vehicle's computer "sees" and "thinks," you run head on into a technological barrier. Autonomous vehicle engineers and developers of smart sensors and components now know how to detect vehicles, pedestrians, road signs, and road obstacles under different road conditions, but merging data from dozens of sensors in order to predict the traffic makes it necessary to process a massive amount of visual information in real time. Estimates are that each autonomous vehicle will create 4,000 gigabytes of visual information on a typical day's driving.
The solution now being adopted by the auto industry is to use machine vision and artificial intelligence (AI) in order to give the sensors and data processors in the vehicle capabilities similar to those of human vision - from absorption of a massive stream of information to sorting, processing, and translating the information into action. This technology is already attracting a multi-billion dollar market with dozens of startups, and is now changing the way work is done in espionage, medicine, marketing, and so on. The autonomous vehicle, however, poses a serious challenge, due to the complex interactions of the road and the need for a "driving policy" that I mentioned earlier.
The currently prevailing approach to designing AI systems for an autonomous vehicle is "supervised learning." In other words, if you want your system to be able to independently detect a road sign or marking, you have to feed into it a very large number of catalogued examples of that road sign under various lighting and climate conditions, so that it will issue alerts about it (in passive systems) or respond to it, for example by braking independently.
This method, however, requires a great deal of time and "learning" resources, and it is far from perfect. Its detection and adaptation ability is only as good as the information fed into it during learning, and the result is liable to be false alarms or detection failures. When a traffic light, road sign, or oncoming vehicle is involved, 99% accuracy is just not good enough.
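To make the supervised approach and its weakness concrete, here is a minimal, purely illustrative sketch (toy data and a toy nearest-centroid classifier, not any vendor's actual pipeline): the model can only recognize what its labeled examples cover, so an input unlike anything in the training set is liable to be misclassified.

```python
# Illustrative supervised learning: a nearest-centroid classifier.
# Feature vectors and labels are invented toy data, not a real sign dataset.

def train(examples):
    """examples: list of (features, label); returns per-label centroid vectors."""
    sums, counts = {}, {}
    for feats, label in examples:
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, v in enumerate(feats):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(centroids, feats):
    """Return the label whose centroid is closest to the input features."""
    def dist(lab):
        return sum((a - b) ** 2 for a, b in zip(centroids[lab], feats))
    return min(centroids, key=dist)

# Labeled examples: (brightness, red-channel share) per sign crop.
labeled = [
    ([0.9, 0.8], "stop"), ([0.8, 0.9], "stop"),            # daylight stop signs
    ([0.7, 0.1], "speed_limit"), ([0.6, 0.2], "speed_limit"),
]
model = train(labeled)
print(classify(model, [0.85, 0.85]))  # -> stop (resembles the training examples)
# A dimly lit stop sign was never in the training set, so the model
# misclassifies it -- the coverage problem described above:
print(classify(model, [0.2, 0.3]))    # -> speed_limit (a detection failure)
```

The failure on the dim sign is exactly why 99% accuracy from a finite labeled set is not good enough on the road.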
What the auto industry is really looking for now are AI systems with self-learning capability - what is sometimes also called "unsupervised learning" - systems that can perform the process from the bottom up, i.e. independently achieve detection and conclusions according to the data gathered on the road, rather than from the top down, i.e. through preprogrammed examples.
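The "bottom up" idea can be illustrated with the simplest classic unsupervised algorithm, k-means clustering: the system is given raw, unlabeled observations and discovers the groupings itself. This is only a generic sketch of unsupervised learning, not a description of any specific company's method; note that the naive initialization (first k points) is a known weakness, so the toy data below starts with one point from each natural cluster.

```python
# Illustrative unsupervised learning: tiny k-means clustering.
# The algorithm discovers groupings from raw points -- no labels are supplied.

def kmeans(points, k, iters=20):
    centers = [list(p) for p in points[:k]]  # naive init: first k points
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(centers[i], p)))
            groups[nearest].append(p)
        for i, grp in enumerate(groups):
            if grp:  # move each center to the mean of its assigned points
                centers[i] = [sum(vals) / len(grp) for vals in zip(*grp)]
    return centers, groups

# Unlabeled 2-D observations forming two natural clusters.
data = [(0.1, 0.2), (0.9, 0.8), (0.2, 0.1),
        (0.15, 0.15), (0.8, 0.9), (0.85, 0.85)]
centers, groups = kmeans(data, k=2)
print(sorted(len(g) for g in groups))  # -> [3, 3]: both clusters recovered
```

No example was ever labeled "cluster A" or "cluster B"; the structure was inferred from the data alone, which is the essence of the bottom-up approach described above.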
This is the holy grail of the AI sector, and it is a distinct possibility that the search for it will lead the auto industry to Israeli company Cortica, which has developed AI technology with self-learning capability.
Ambitious goal, proven technology
Cortica is not a typical startup. The company was founded in 2007 on the basis of research conducted in 2003-2007 for the purpose of "hacking" into the human brain and translating its method of operation into algorithms. The company calls this "the borderline between brain science and AI" or "reverse engineering" of the human brain. It has produced an algorithm that imitates the way information is processed in the brain, and enables the system to learn without supervision - like children independently learning about the world around them. If that sounds very mysterious to our readers, they are not the only ones. Up until recently, many people in the sector also asserted that such capabilities could not be achieved with the existing technology.
The goal of the initial research was to understand what the basic calculation unit of the cerebral cortex is, how it learns, and what is the simplest mathematical model that can reproduce these capabilities in a computer. Since some of the founders are IDF Intelligence Unit 8200 alumni, the company realized the security potential of the findings.
Since it was founded, Cortica has remained largely below the media radar, although sources in the sector are aware of its considerable business and technological accomplishments in the security field. These applications include automated analysis of large databases and activity in the field of unmanned aerial vehicles. Cortica's technology can independently and rapidly go over the many hours of video that such tools generate and produce insights from them, such as locating suspects in population centers and monitoring objects.
The company does not like to talk about financial matters. According to the Crunchbase website, however, it has raised nearly $70 million to date in three rounds, the most recent of which was in December last year, when it raised $30 million in a round led by major investment companies from Hong Kong and Russia.
The company started working in the autonomous vehicle sector last year, having developed successful Internet and mobile communications products in recent years, mainly in the Asian markets.
"After the last large-scale round, we considered new development instruments," says Cortica cofounder and CEO Igal Raichelgauz. "We entered the autonomous vehicle sector because of the enormous quantity of information it is expected to generate - almost 4,000 gigabytes per vehicle per day. In order to handle information on this scale, you need autonomous machines with an autonomous brain that can collect basic information from their surroundings, identify it, and accumulate 'experience' like a human driver. We therefore concluded that the auto industry is much more relevant."
The company makes very impressive claims on its website; some will call them overambitious. It asserts that it has developed "artificial intelligence that can understand images at a human level," adding: "Even the most complex technologies could not understand the visual world in the same way that people do - until now. By utilizing brain research in order to create an AI system with self-learning capability, Cortica has developed the most effective computer vision system ever seen."
The company's claims are backed by nearly 260 patent applications filed for its technology. Over 50 of these have already been granted, and the rest are in the approval process. According to a report by Insight, this is the largest number of patents held by a single AI company. Assuming that this technology is successfully translated to the autonomous vehicle sector, it will be a very significant breakthrough, and the vehicle and chip industries will be willing to give an arm and a leg to obtain exclusivity for it.

Raichelgauz says that what his company has is not a theoretical solution; it is a product based on mature and available technology. "We already have a solution for the auto industry at the level of a product that works," Raichelgauz says. "The system is indifferent to the hardware infrastructure on which it operates, and can handle information to the same degree from a variety of sensors, including radar, camera sensors, supersonic sensors, and others."

Raichelgauz reveals that the product is already being tested by three major auto industry companies, two of which are including it in their computer platforms. The technology is being integrated in the development plan for autonomous vehicles scheduled to be built in the next two or three years. Cortica's product builds an environmental model (a computerized status report) in real time and facilitates prediction for the purpose of real-time responses. The technology also makes it possible to fuse information from a large number of sensors into one digital signature.
Predicting the road
"The usual solution today in AI is deep learning," Raichelgauz says. "This is a slow process. In the system developed by Cortica, the 'brain' learns the rules of the game by itself, labels the information, and reaches conclusions, like children learning to understand their surroundings. What is special about Cortica's technology is not only its self-learning capability, but also that it is transparent, transferrable, and verifiable." This may be the right place to interject that due to the complexity of the AI processes, a great many commercial solutions are now offered as a "black box": data are fed into the system and processed results are received, without any ability to monitor and understand exactly how the system reached those conclusions. This is a considerable problem for systems that are responsible for people's lives and safety, and regulators worldwide, e.g. in the EU, are now having to enact regulations in the matter.
Typical deep learning systems also have trouble accumulating knowledge and transmitting it to other systems, which forces those operating new systems to learn "from scratch." Another weak point of the common deep learning methods is the difficulty of predicting the system's performance. After all, in practice, the performance of an AI system is liable to be completely different from the theoretical projection.
Cortica's system, on the other hand, creates digital "signatures" that represent known concepts from the information world. When the system is being tested, a signature can point, for example, to a systematic connection, together with a sample (image and object) that explains the reason for a failure. The "experience" accumulated by the system can be transmitted from one vehicle to another, and between systems. "The system makes it possible to monitor the AI process of 'thinking' and drawing conclusions, and to assess it according to supervisory and performance criteria," Raichelgauz says.
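To give a feel for why compact digital signatures are transparent and transferrable, here is a deliberately simplified sketch. It represents an observation as a sparse set of active feature indices and compares signatures with Jaccard similarity; Cortica's actual signature scheme is proprietary and certainly richer, so everything below is an assumption for illustration only.

```python
# Illustrative only: "digital signatures" as sparse sets of active features,
# compared with Jaccard similarity. This is NOT Cortica's proprietary scheme;
# it only shows why such codes are compact, comparable, and easy to transfer.

def signature(features, threshold=0.5):
    """Keep the indices of strongly activated features as a sparse code."""
    return frozenset(i for i, v in enumerate(features) if v > threshold)

def similarity(sig_a, sig_b):
    """Jaccard similarity: shared active features / all active features."""
    if not sig_a and not sig_b:
        return 1.0
    return len(sig_a & sig_b) / len(sig_a | sig_b)

# Invented feature activations for two views of the same truck and a pedestrian.
truck_seen_by_car_1 = signature([0.9, 0.8, 0.1, 0.7, 0.2])
truck_seen_by_car_2 = signature([0.8, 0.9, 0.2, 0.6, 0.1])  # same object, new view
pedestrian = signature([0.1, 0.2, 0.9, 0.1, 0.8])

print(similarity(truck_seen_by_car_1, truck_seen_by_car_2))  # -> 1.0
print(similarity(truck_seen_by_car_1, pedestrian))           # -> 0.0
```

Because a signature here is just a small, serializable set, it can be inspected directly (which features fired and why a match failed) and shipped between vehicles or systems, which is the transparency and transferability the quote above describes.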
The solution that the company is offering for an autonomous vehicle operates on several levels. Its system accumulates a profound understanding of the vehicle's immediate surroundings, and simultaneously identifies over 10,000 generic objects, such as cars and trucks, pedestrians, traffic situations, and more. For example, the resolution is good enough to identify a pedestrian on a hoverboard or one holding a mobile telephone.
The system can decode complicated situations in context, and create a set of probabilities for the observed object's next action, including additional objects entering the frame. Try to imagine a ball rolling onto the road: the car predicts that there is a possibility that a child will then also appear on the given route, and simultaneously assesses the response to the situation by the cars in front and on the sides, which are moving at different speeds. In addition, the system also maps the surroundings at a high level, with constant monitoring of changes in the surrounding infrastructure and objects and the potential for using them for mapping and information gathering - like the project that Mobileye is now trying to promote globally.

Cortica's technology is based on digital signatures that generically represent all the sensor information coming from the real world. One of the main uses of signatures is for mapping the space and precisely locating the objects in it, including an extremely accurate location of the vehicle itself. While the existing mapping technologies, such as that of Mobileye, are based on anchors - predefined objects such as traffic lights and road signs - Cortica's signature utilizes every pixel in the image for anchoring and mapping. For example, when driving on a dirt road, Cortica's signatures are anchored to boulders and specific road formations, making it possible to maintain precise mapping. Because Cortica's technology is indifferent to the type of sensors and the type of hardware, it complements Mobileye's technology instead of competing with it.
Cortica's technology can provide the auto industry and other industries with an important and financially significant breakthrough. Taking into account the red-hot market and the valuations of companies in it, the numbers are very large.
In contrast to many companies in the sector that we recently talked to, Cortica appears unimpressed by the large amounts, and is not making an exit or IPO a top priority. "We have no interest in an IPO or large-scale financing round right now," Raichelgauz declares. "At the moment, we're looking for strategic partnerships with important players, preferably tier-1, who will give us the ability to utilize the technology in products that will be included in vehicles." Meanwhile, the company is making an effort to recruit dozens of employees from various areas, while its current employees include experts in various fields, including physics, mathematics, brain science, and AI specialists.
Published by Globes [online], Israel Business News - www.globes-online.com - on December 5, 2017
© Copyright of Globes Publisher Itonut (1983) Ltd. 2017