Thanks to so-called deep-learning systems for crop recognition, machines that can combat weeds site-specifically, with a minimum of labour and chemicals, are getting ever closer.
Last summer, Dutch national television broadcast a short report on a large organic arable farm on the young sea clay in the province of Flevoland. Men and women were working in organic parsley, which was due to be harvested mechanically the next day. The crop still had to be manually stripped of weeds such as knotweed (Polygonum). The question was whether field robots are already capable of taking over this unpleasant and labour-intensive (i.e. expensive) job.
In front of the camera, the farm's owner, Burgers, shares his thoughts about a robot taking over these chores in his fields. Next year? That seems a bit premature. How about two years? Burgers ventures a hesitant 'yes', but his body language shows he still thinks it is too early.
He is probably right. Even though the specialist press and mainstream media show more and more autonomous robots, almost none of them are operational. They can do their rounds in the field, but separating crops from weeds is a different thing altogether. It is not easy: crops go through several growth stages in which they look different, light conditions can vary significantly, drought can change a crop's appearance, and so on.
With parsley, the robot must perform some intricate tricks: detecting weeds between the harvest-ready crop and cutting them off at the root or pulling them out of the ground. That may all be relatively easy for humans, but for machines, detection alone is already difficult.
That does not mean no one is working on automated crop detection; it is being pursued on several fronts. I recently talked to a researcher at Wageningen University & Research (WUR) in the Netherlands who works on the development of a flexible and robust detection system that can separate crops from weeds.
The biggest challenge for such a system is dealing with all the variables it might encounter: differences in variety or growth stage, for example, and variation in light conditions, such as cloudy weather or bright sun. Machinery manufacturers, agricultural cooperatives and chemical companies, among others, support the development with their expertise and money, which shows it is more than a theoretical exercise.
Recent developments in the field of 'deep-learning algorithms' are used in this project. A deep-learning algorithm works like this: you start with a blank algorithm that knows nothing and train it with examples. You show it an image of a beetroot and say, 'this is a beetroot'.
If you then show it a potato plant without any further instruction, the algorithm will think it is a beetroot. However, if you tell it that this is in fact a potato plant, it will automatically learn that not all green plants are beetroots.
You refine the algorithm by continuously showing it new images of beetroots and potato plants. After thousands, or rather tens of thousands, of examples, it learns to pick out the specific aspects of each crop; from the images alone, it works out which characteristics it should consider.
The result is an algorithm that recognises the colour and shape of a beetroot plant. That is why we call such an algorithm 'self-learning'. Which characteristics it specifically relies on is unknown; call it a large black box. In practice, that does not even matter much.
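To make this example-driven training concrete, the sketch below shows what such a loop can look like in practice. It is only an illustration: the use of PyTorch, the folder layout with 'beetroot' and 'potato' subfolders and the choice of network are assumptions, not the researchers' actual setup.

```python
# A minimal sketch of the example-driven training loop described above.
# Assumptions (not from the article): PyTorch, a folder of labelled images laid
# out as images/beetroot/*.jpg and images/potato/*.jpg, and a standard small network.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Each subfolder name ("beetroot", "potato") becomes a class label.
dataset = datasets.ImageFolder("images/", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a blank network whose final layer distinguishes two classes.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in loader:            # "this is a beetroot", "this is a potato"
        optimizer.zero_grad()
        prediction = model(images)
        loss = loss_fn(prediction, labels)   # how wrong were the guesses?
        loss.backward()                      # nudge the internal weights to do better
        optimizer.step()

# Keep the learned weights so the model can be refined later in the season.
torch.save(model.state_dict(), "crop_classifier_weights.pt")
```

The network is never told to look at colour or leaf shape; it only sees labelled images and the penalty for wrong guesses, which is exactly the black-box character described above.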
This manner of learning corresponds with how children learn to separate things from one another. You show a toddler a car and say ‘car’. After that, another car and then another one. If you show it a tractor, the child will probably call it a car. And you say, ‘no, this is a tractor’.
If you do this over and over without naming the criteria, the child eventually learns to separate a car from a tractor, even in new situations with tractors and cars it has not seen before. So, both the child and the algorithm are self-learning.
For machines, practice shows that correctly separating potatoes from beetroots is not easy. This was recently confirmed during a field demonstration of autonomous control of ground keepers (volunteer potato plants that regrow from tubers left behind after harvest). The researcher had trained the algorithm on images of ground keepers and beetroots in their early growth stages.
At first, the system worked exceptionally well: all potatoes were sprayed, whereas the beetroots were nearly all left alone. A week later, things did not go as planned in the same field. The beetroots had grown, and the potato plants were already nearly dead from the previous spraying. The algorithm had never encountered such large beetroots or sprayed potato plants before. As a result, several beetroots were sprayed and several potato plants were not.
This problem can be solved relatively easily by collecting new images of large beetroots and half-dead potatoes. Based on these examples, the system can be retrained to deal with the new situation. In principle, you need examples of every situation the system might encounter in the field.
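As an illustration of that retraining step, the hypothetical sketch below continues the earlier example: newly collected late-season images are added to the original training set and the previously learned weights are refined with a few extra passes. The file names, folder layout and learning rate are assumptions, not the researchers' actual code.

```python
# Hypothetical sketch of retraining on newly collected examples.
# Assumptions: PyTorch, the weights saved from the earlier training run, and new
# late-season images stored next to the original early-season ones.
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

old_data = datasets.ImageFolder("images/early_season/", transform=transform)
new_data = datasets.ImageFolder("images/late_season/", transform=transform)   # large beets, sprayed potatoes
loader = DataLoader(ConcatDataset([old_data, new_data]), batch_size=32, shuffle=True)

# Rebuild the network and load the weights learned earlier in the season.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load("crop_classifier_weights.pt"))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)   # small steps: refine, don't overwrite
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                     # a few extra passes over old and new examples together
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

Training on the old and new images together is a deliberate choice in this sketch: refining on the late-season images alone would risk the model 'forgetting' what early-season plants look like.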
This may work for detecting ground keepers in sugar beets, but it becomes a big challenge if we also want to detect other crops and different weed species in the future. Especially in the seedling stage, when weeds are easiest to control, separating them from crop seedlings is difficult.
Wageningen University & Research works on a system that can collect new examples, learn and adapt, to scale up the system to multiple crops and weeds. This self-learning system consists of a camera system that makes its rounds through the fields multiple times in the growth season. All plants are detected and mapped from the very beginning.
If the system later recognises a plant as a beetroot, it follows that the seedling recorded earlier in that spot was also a beetroot. When the detection system sees a thistle after a while, the seedling in that spot must have been a thistle too. The algorithm now 'knows' what beetroot and thistle seedlings look like. In this way, the system can learn different growth stages and conditions by itself and become more reliable.
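The sketch below illustrates this retroactive labelling idea under simple assumptions: every observation carries a field position, and once a mature plant at a position is identified, earlier unlabelled images from (almost) the same spot inherit that label and become new training examples. The data structure and the 5 cm matching distance are purely illustrative, not the Wageningen implementation.

```python
# Illustrative sketch of retroactive labelling based on field position.
# The Observation structure, file names and the 5 cm matching distance are
# assumptions for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    x: float                      # position in the field, in metres
    y: float
    week: int                     # week of the growing season
    image: str                    # path to the recorded image
    label: Optional[str] = None   # unknown until the plant is recognised

def propagate_labels(observations, match_distance=0.05):
    """Copy the label of an identified plant back to earlier, unlabelled
    observations made at (almost) the same position."""
    identified = [o for o in observations if o.label is not None]
    for obs in observations:
        if obs.label is not None:
            continue
        for plant in identified:
            if abs(obs.x - plant.x) <= match_distance and abs(obs.y - plant.y) <= match_distance:
                obs.label = plant.label    # the seedling inherits the mature plant's label
                break
    return [o for o in observations if o.label is not None]

# Week 2: an unknown seedling is mapped; week 6: the plant in that spot is recognised as a thistle.
log = [
    Observation(x=1.20, y=3.40, week=2, image="week2_plant017.jpg"),
    Observation(x=1.21, y=3.41, week=6, image="week6_plant017.jpg", label="thistle"),
]
training_examples = propagate_labels(log)  # the week-2 image is now labelled "thistle" as well
```

In this way, every pass through the field generates fresh, automatically labelled examples of seedlings and growth stages that can be fed back into the detection algorithm.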
The Wageningen researchers are confident about the technology. “What the human eye can detect, machines can detect as well”, they say. Researcher Thijs Ruigrok: “Of course it is difficult to separate one seedling from another. However, a self-learning system can eventually do this. It is a big challenge, but we hope to finish this system within 2 years.”