The developments in agricultural robotics, machine vision, and AI will drive a deep and far-reaching transformation of the way farming is carried out. Yes, the fleet sizes and the total area covered by new robots are still vanishingly small compared to the global agricultural industry. However, this should not lull the players into a false sense of security because the ground is slowly but surely shifting. Robotics and AI are enabling a revolution in affordable precision, which will eventually upend familiar norms in agrochemical supply, in agricultural machine design, and in farming practices.
This development frontier has the wind in its sails, pushed by rapidly advancing hardware and software technologies and pulled by growing structural challenges and needs in farming. In IDTechEx's assessment, these technology developments can no longer be dismissed as gimmicks or as too futuristic. They are here to stay and will only grow in significance. Indeed, all players in the agricultural value chain will need to develop a strategy today to benefit from, or at least to safeguard against, this transformative trend.
This article provides an extended discussion of emerging small and mid-sized autonomous precision robots as well as intelligent robotic implements. These are just two classes of robots amongst many in the agricultural robot space. Subsequent articles will cover autonomous tractors, fresh fruit robotic picking, autonomous spraying, and automatic milking. For each category, IDTechEx assess the technical and commercial development status and analyse remaining challenges as well as future development trends and roadmaps.
This article is based on the new IDTechEx report “Agricultural Robots, Drones, and AI: 2020-2040: Technologies, Markets, and Players”. This report provides a comprehensive analysis of all the hardware and artificial intelligence technology trends, helping to assess and contextualize the current development status and to envisage a realistic future technical and market roadmap.
The IDTechEx report analyses all the emerging product types including autonomous robots taking plant-specific precision action, intelligent vision-enabled robotic implements, diverse robotic fresh fruit harvesters, highly automated and autonomous tractors and high-power farm vehicles, drones, automatic milking, and so on. It provides interview-based company profiles and analysis of all the key companies and innovators. Finally, the report offers short- and long-term market forecasts, considering the addressable market size in area/tons and value, penetration rates, annual robot sales, accumulated fleet size, total RaaS (robot as a service) revenue projections and so on.
Autonomous Ultraprecision Robots
Examples of these products are shown below. They are often small or mid-sized robots designed to navigate autonomously and to take precise, plant-specific action automatically. These robots therefore bring together AI-based machine vision, precision control engineering, and autonomous navigation technology.
Machine vision is often a core competency of these robots. In simple robots, the vision system follows a crop row and classifies any out-of-row living object as, for example, a weed. This approach is mature and easy to implement, but its performance is limited and its future development horizon is extremely constrained.
In complex systems, the vision technology sees, identifies, and localizes crops and weeds, enabling intelligent site-specific action. Here, the vision systems increasingly use deep learning algorithms, often trained on expert-annotated image datasets. The performance of these algorithms exceeds that of algorithms based on advanced hand-crafted features. For example, a deep learning system might reach precision, recall, and F1 scores of around 86.5%, 86.1%, and 0.86, respectively, whilst a conventional system might reach 57%, 50%, and 0.53. These figures are obviously indicative but nonetheless serve to highlight the performance leaps which trained deep neural networks (DNNs) enable.
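To make the comparison concrete: the F1 score is simply the harmonic mean of precision and recall. A short sketch, using only the indicative percentages quoted above (not measurements of any specific system), shows the two F1 figures are internally consistent:

```python
# F1 is the harmonic mean of precision and recall.
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

# Deep-learning system: precision ~86.5%, recall ~86.1% (indicative figures)
f1_dnn = f1_score(0.865, 0.861)

# Conventional hand-crafted-feature system: precision 57%, recall 50%
f1_conventional = f1_score(0.57, 0.50)

print(round(f1_dnn, 2), round(f1_conventional, 2))  # 0.86 0.53
```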
Crucially, DNN-based approaches open up the technology roadmap, since the same technique can be used to teach the machine to identify all manner of plants. This approach therefore lays down a pathway from systems targeting specific plant types towards a more universal system. As such, it is not currently operating anywhere near the limits of what it can offer.
Naturally, these systems are not without their own challenges. The data collection and annotation phase can be slow and expensive since the training set needs to capture a diverse range of conditions. This process is becoming increasingly streamlined and partially outsourced and/or semi-automated. In the future, more open data sources will become available too. Furthermore, some degree of software adjustment may be required when entering new environments or locations depending on the dataset. Newer algorithms, however, show that this tweaking can be dramatically minimized.
Both the training and the inference (the running of the trained algorithm) phases can be computationally intensive. The former is usually done in the cloud, but the latter will likely require a GPU at the edge, on the robot itself. This adds to cost. A key parallel technology trend will, however, likely help resolve this challenge: the development of algorithms and mathematical techniques that drastically lighten the computational load during inference without significant loss of performance. Finally, the speed of the algorithm, and the associated processing rate in frames per second (fps), is directly linked to the robot's travel speed and thus its productivity. As such, incrementally boosting this parameter will be a key development driver.
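The fps-productivity link can be sketched with simple arithmetic. In this hypothetical sketch (every number is an illustrative assumption, not a measurement of any real robot), the robot cannot travel faster than its camera-and-algorithm pipeline can cover ground:

```python
# Hypothetical sketch: the processing frame rate caps the travel speed,
# because consecutive frames must still overlap to avoid missing plants.
def max_speed_m_per_s(fps: float, footprint_len_m: float,
                      overlap: float = 0.2) -> float:
    """Top ground speed that preserves an `overlap` fraction between frames."""
    return fps * footprint_len_m * (1 - overlap)

# e.g. 10 fps over an assumed 0.5 m-long camera footprint with 20% overlap
print(max_speed_m_per_s(10, 0.5))  # 4.0 m/s
```

Doubling the achievable fps doubles the speed ceiling, which is why incremental algorithm speed-ups translate directly into productivity.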
In parallel to the machine vision, many versions of this emerging robotic class are autonomous. The autonomy challenge is much simpler than that of an autonomous car: the environment is well controlled and predictable, and the speed of travel is low. In many instances, the open-field environment means that RTK-GPS is sufficient, although the final precision positioning will likely require a camera or similar sensor. The situation is more complex outside row-vegetable farms, as the environment may be GPS-poor, or the robot may need to intelligently discriminate between a true roadblock and, for example, a tree twig that can be pushed out of the way. This requires more intelligent systems utilising cameras and trained algorithms.
In general, the technology barriers against autonomous navigation are relatively low. The integration of all the sub-systems into a well-orchestrated machine, and the ruggedization of the system for reliable in-farm operation, are perhaps more challenging. Legislation is today a hindrance, including in places such as California. However, we expect the legislative framework to become more accommodating soon.
Some companies now do not consider autonomous mobility a core competency, focusing instead solely on machine vision and precision actuation. Their bet is that autonomous mobility will become widely available, perhaps as a commodity, in the not-too-distant future. This is not unreasonable.
The rise of autonomous robots, provided they require little remote supervision, can alter the economics of machine design, enabling the rise of smaller and slower machines. Indeed, the elimination of the per-vehicle driver overhead is the basis of the swarm concept. There is clearly a large productivity gap between current large, heavy vehicles and fleets of slow, small robots. This gap, however, can and will only narrow, as the latter have substantial room for improvement without requiring breakthrough innovation.
Current and Future Commercialization Status
Most firms today are developing robots focused on precision weeding. The general ideas are similar, although system designs and readiness levels vary greatly. The choice of weeding mechanism also differs (precision chemical spraying vs mechanical vs electrical weeding).
The ROI is still hard to calculate. Indications are that a fully autonomous precision chemical weeding robot can reach a payback period of 2-3 years when operated across three crops during the entire season. This metric will certainly improve in the future. Note that the ROI is driven by labour savings, chemical savings, and boosted yields.
The latter is expected to play a crucial role. Note that chemical savings arise because precision spraying can cut consumption by 90% compared to untargeted application. This has drastic consequences for the agrochemical business, which is today comfortably attached to its blockbuster massive-volume chemicals. Yield can be improved because collateral damage to the crops from untargeted chemical application can be minimized, avoiding an annual yield loss of 5-10%. Furthermore, ultra-precision technology can help manage herbicide-resistant weeds, which are spreading at rates exceeding 10%/year, and in hotspot areas even faster. Last but not least, the lightweight nature of the robots can prevent soil compaction, keeping more of the soil fertile and thus boosting total output. In short, with these robots one can do more with less, furthering the long-term historical trend towards increasing agricultural productivity.
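The payback arithmetic can be sketched as follows. Every monetary figure here is an illustrative assumption (not IDTechEx data), chosen only to show how the three savings streams combine into the 2-3 year range quoted above:

```python
# Hypothetical payback-period sketch; all monetary figures are assumptions.
def payback_years(robot_cost: float, labour_saving: float,
                  chemical_saving: float, yield_gain: float) -> float:
    """Years to recoup the robot cost from the annual benefit streams."""
    return robot_cost / (labour_saving + chemical_saving + yield_gain)

years = payback_years(
    robot_cost=150_000,      # assumed upfront cost of the robot
    labour_saving=35_000,    # assumed annual labour displaced
    chemical_saving=20_000,  # assumed saving from ~90% less herbicide
    yield_gain=10_000,       # assumed value of avoided collateral crop damage
)
print(round(years, 1))  # 2.3 — within the 2-3 year range
```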
The first generation of these robots was typically small, serving as technology demonstrators. The purpose was to demonstrate technical viability, to gather real in-field feedback to improve the hardware design, and to build up a data acquisition loop to enhance the machine vision technology. The current generation of machines is typically larger, faster, and more capable. The first generation of products may have covered 2-3 ha/day, barely sufficient even for small farms, whereas current ones report 8-10 ha/day, which may be sufficient for many European-sized farms. IDTechEx expect the trend towards higher productivity to continue.
The machine vision capabilities have also expanded, covering more crop or weed types. This will widen the applicability of the robots, increasing the utilization rates across the growth calendar of more crops, thus shortening the ROI time.
Furthermore, in parallel, research has shown that deep learning can enable robots equipped with simple RGB cameras (not hyperspectral sensors) to detect multiple plant-specific diseases. This is interesting because it shows that these robotic systems can and will evolve beyond precision weeding towards precision plant-specific tasks which encompass the entire lifecycle of farm management.
The business models are mixed. Most developers are opting for a service, or RaaS, business model. The aim is to make it easy for users to adopt the technology without requiring large upfront capital investment and without fear of rapid technology obsolescence. At the same time, the developers can deploy products early without having to perfect the design to the point where it can be reliably operated without expert intervention. Finally, it affords the developers the opportunity to earn some revenue and to retain a data (image) generation mechanism. Some are also pursuing a classical equipment sale-and-support model. This mode will become more common as the technology matures, but it will include a software service and upgrade subscription element. The latter will be an integral part of long-term revenue models, as it allows the performance of older robots to be upgraded over the air over time.
The deployment is still vanishingly small compared to the overall addressable market. This will, however, change within the IDTechEx forecast period. The IDTechEx report, “Agricultural Robots, Drones, and AI: 2020-2040: Technologies, Markets, and Players”, also provides annual robot sales forecasts, penetration rate forecasts, addressable market forecasts, and RaaS income forecasts. Note that the forecasts in this report are long-term because this transformation will inevitably unfold over multiple decades. The starting gun, though, has already been fired.
Intelligent Robotic Implements: The Inevitable Next Generation of Agricultural Tools
Advances in vision technology are transforming tractor-pulled implements, upgrading them into intelligent computerized tools able to take plant-specific precise action. The core technology here is also machine vision, which enables the identification and localization of specific plants. The implement-based approach does not focus on autonomy, although the tractor itself can readily be made autonomous to render the entire system automatic if needed. This system is designed to be competitive on large farms, which demand high productivity, which in turn is linked to technology parameters such as fps (frames per second), false-positive rate, sprayer controller speed, and so on.
Currently, the leading developer, which was acquired for $305M in 2017, claims a 40-foot-wide implement pulled at 12 mph and covering 12 rows of crops. This system achieves 2-inch resolution and 20 fps imaging, deploying 40 cameras and 25 onboard GPUs. In the future, system costs will likely fall, particularly if lighter versions of the inference-side algorithms become available, rendering GPU processors unessential without major performance sacrifice.