Supervised machine learning methods often exclude human interaction from the learning process, which undermines the specialists' understanding of, and confidence in, the machine's actions. We aim to considerably reduce the number of labeled examples required for machine learning, in order to build explainable and reliable decision-making (or decision-support) systems based on image analysis.
We have investigated computational methods that exploit the superior precision of machines, which can process data and extract information from images free of fatigue, together with the superior ability of humans to abstract knowledge from that information. In these methods, the specialists provide some initial knowledge about the image analysis problem; the machine learns and reports back on its capability to solve the problem automatically; and the specialists evaluate that capability, intervening in the next iteration of the learning process as needed, until they are satisfied. We are also interested in analyzing the consistency of the specialists' actions, in order to detect and avoid human errors in the process.
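For illustration only, the minimal Python sketch below captures the shape of this interactive loop. The callables `annotate`, `train`, `evaluate`, and `accept` are hypothetical placeholders standing in for, respectively, the specialist's labeling interface, the learning algorithm, the machine's capability report, and the specialist's acceptance decision; none of them is part of the methods described above.

```python
import random
from typing import Callable, List, Tuple

Image = List[float]  # placeholder: an image represented as a feature vector
Label = int          # placeholder: a class identifier

def human_in_the_loop_training(
    unlabeled: List[Image],
    annotate: Callable[[Image], Label],           # specialist labels an image
    train: Callable[[List[Tuple[Image, Label]]],  # machine learns a model
                    Callable[[Image], Label]],
    evaluate: Callable[[Callable[[Image], Label]], float],  # capability score
    accept: Callable[[float], bool],              # specialist's verdict
    seed_size: int = 10,
    batch_size: int = 5,
) -> Callable[[Image], Label]:
    """Iterate: the specialist labels a small batch, the machine trains and
    reports its capability, and the specialist either accepts the model or
    intervenes by labeling another batch for the next iteration."""
    pool = list(unlabeled)
    random.shuffle(pool)

    # Initial knowledge: the specialist labels a small seed set.
    labeled = [(img, annotate(img)) for img in pool[:seed_size]]
    pool = pool[seed_size:]

    while True:
        model = train(labeled)        # machine learns from current labels
        score = evaluate(model)       # feedback on its current capability
        if accept(score) or not pool: # specialist decides whether to stop
            return model
        # Specialist intervenes: label another small batch for the next round.
        batch, pool = pool[:batch_size], pool[batch_size:]
        labeled += [(img, annotate(img)) for img in batch]
```

In a real system, the evaluation step would typically present the specialist with the machine's results on sample images rather than a single score, so that the intervention can target the machine's specific mistakes.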