The paper – published in NJAS: Impact in Agricultural and Life Sciences – is titled “Managing the risks of artificial intelligence in agriculture”. Its primary focus is the ethical issues raised by the use of AI in agriculture.
The lead author is Professor Robert Sparrow, a Monash philosopher whose work explores the tricky territory of the ethical, social and political impacts of new technologies.
At issue in farming are the benefits and risks of the growing use of machine learning and artificial intelligence, both in large-scale industrial “agribusiness” and on smaller properties.
Family farms in Australia are already able to use (human-piloted) drones and, soon enough, tractors with an “autopilot” capability. A Monash engineering team is developing an autonomous harvesting robot for apples.
“Farmers now can use GPS-enabled tractors that can traverse fields without active supervision from the driver,” Professor Sparrow says. “Increasingly in Australia, farmers are still in the cabin, but they're working on their laptops while the machine drives up and down.
“They're also using sensor technologies to gather data about soil moisture in order to change watering regimes. In greenhouses, there's a company offering robotic pollination of plants, using a robot that looks a bit like a toy truck with an air puffer to puff pollen. Fruit packers are using machine learning in packaging for quality control of fruit and vegetables.”
Where will the farmer fit in?
In industrial farming, automation and AI are far more widespread. Predicting climate change also relies on machine learning.
The paper looks at what may happen next, and what it means, not for the economy necessarily, or the GDP of Australia, but for the role and status of the “farmer”.
Co-authors are Monash research fellow in philosophy Mark Howard and the University of Wollongong’s Associate Professor Chris Degeling, a philosopher and social scientist.
“The future of agriculture is of vital importance to all of us,” Professor Sparrow says. “You can't look at what's happening with climate change or with ecosystems collapsing without thinking that the future of farming is a big part of the future of life on this planet. Not just human life, but animal life.
“I’ve been working on the ethics of robotics for a long time,” he says. “I started on it in about 1998, and I've tried to be just ahead of where the technology is about to emerge. I like to work where there’s genuine philosophical work to be done.”
Which brings us back to “the world is not data, plants and animals are not machines”. The key question this raises, and which the paper addresses through a set of benefits and risks, concerns the changing face of agricultural work, potentially away from the plants and animals, and towards the data.
The paper cites US research from 2015 in Rural Sociology journal on the “de-masculinisation” of agribusiness, with the Australian researchers adding: “…by allowing fewer human beings to supervise more machines, AI systems will make farmers’ jobs more white-collar and professional.
“In the future, management of the farm may differ little from management of any other complex enterprise carried out by teams of humans and robots. [The US paper argues] that a transformation of the cultural image of the farmer from a person involved in manual labour on the land to a white-collar manager is already being promoted in the advertising of fertiliser, pesticide, seed and farm machinery manufacturers.
“An emphasis on ‘management’ and attention to data as key skills in farming is in turn likely to transform farmers’ relation to the land and landscape, and to their crops and animals, and further attenuate their relationship to the history of the practices in which they are engaged.”
Put more simply, says Professor Sparrow: “This could change the experience of being a farmer as they spend more and more time managing IT systems.”
Losing connection with the land
The risk is losing contact with the ground, the water and the skies.
“We’re already losing touch with the natural world that sustains us,” Professor Sparrow says, “and I think that's quite dangerous. But these technologies are the culmination or an extension of existing technologies. It's already the case that most people have no idea where their food comes from.
“Labour practices in agriculture can be problematic, as are animal welfare issues. It’s hard not to worry, given the history of the impact of technology in agriculture, that AI will exacerbate these dynamics.”
The paper is a picture of agriculture at the crossroads. How big and (post) modern should a farm actually be? And isn’t being a farmer all about the interactions between human and nature? What are the ethics of removing people from the process of producing the food people eat? Can the involvement of fewer people make farming better?
“There’s two different visions of farming for the future,” Professor Sparrow says. “One is more localised, more biodiverse, more small-scale enterprises. The other vision is of more efficient, high-technology, large farms, at economies of scale.
“The latter is obviously more productive in the short term, but whether it's actually capable of delivering food security into the future would be much more controversial.
“It would be a mistake to say, ‘Never use a robot where a human could do the job’. I think these technologies do have a potential to free people from dangerous work, miserable work, work that people would rather not do.
“The question is, of course, what opportunities replace those job opportunities? People who are keen on robots and AI think that they will create more jobs elsewhere. I'm not convinced that's the case.”
Questions of human choices
Professor Sparrow leads a team of researchers who have secured Australian Research Council funding for three years to extend the work into AI and agriculture; the time and money will be partly spent surveying rural and regional Australian farming interests, including actual farmers. He’ll also address a conference in the United States on the topic in June.
“The questions we raise are old concerns,” he says. “We used to call it ‘technocratic rationality’ – where you stop thinking about why, and you only think about how. When you make that transition from a rich, sensuous reality to the data in the problem that you’re solving, there’s always important stuff that is left out.
“We do have choices here. People can choose to embrace these technologies or not. They can resist parts and embrace other parts. These are human choices.”