Explainable AI aims to make agricultural decisions more transparent
Explainable AI is emerging as a key tool for agriculture to leverage data and automation without the 'black box' problem. A comprehensive review published in the journal Artificial Intelligence Review compiles recent advances in making AI decisions understandable to humans in support of sustainable agriculture.
Agriculture has adopted AI to improve productivity and efficiency, but at the same time the opacity of the models has raised doubts about their reliability, especially in critical fields such as medicine and agriculture. The review highlights that research on explainable AI began to gain clear momentum in 2017, driven by these concerns about trust and reliability.
In the review, the authors describe key explanation methods by which AI model predictions can be justified. These include local, model-agnostic explanations (Local Interpretable Model-agnostic Explanations, LIME), explanations based on Shapley values (SHapley Additive exPlanations, SHAP), and a gradient-based method (Gradient-weighted Class Activation Mapping, Grad-CAM) that produces 'attention maps' highlighting the regions of an input that influenced the model's decision. The idea is that the explanation reveals which factors, or which parts of the observation, influenced the decision and how.
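To make the idea concrete, here is a minimal sketch, not taken from the review, of how Shapley-value attributions (via the SHAP library) could explain a single prediction of a crop-yield model. The feature names, synthetic data, and model choice are illustrative assumptions.

```python
# Minimal sketch: explaining one prediction of a toy crop-yield model with SHAP.
# Feature names and synthetic data are hypothetical, for illustration only.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = ["rainfall_mm", "soil_nitrogen", "temperature_c", "irrigation_hours"]
X = rng.normal(size=(500, len(features)))
# Assumed ground truth: yield depends mainly on rainfall and nitrogen, plus noise.
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.3 * rng.normal(size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # attributions for the first sample

for name, value in zip(features, shap_values[0]):
    print(f"{name}: {value:+.3f}")  # how much each factor pushed this prediction up or down
```

The printed contributions are the kind of case-by-case justification the review describes: they show which factors drove this particular prediction, rather than only reporting overall model accuracy.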
The core message of the review is that increasing explainability can enhance the usability and acceptability of AI in agricultural decision-making: when a model's justifications are visible, its reliability can be better assessed and the technology can be more readily applied in ways that support sustainability.
Source: Leveraging explainable AI for sustainable agriculture: a comprehensive review of recent advances, Artificial Intelligence Review.
This text was generated with AI assistance and may contain errors. Please verify details from the original source.
Original research: Leveraging explainable AI for sustainable agriculture: a comprehensive review of recent advances
Journal: Artificial Intelligence Review
Authors: Aditya Rajbongshi, Fatema Tuz Johora, ... Mohammad Ali Moni
January 17, 2026