Keeping Humans-in-the-Loop for More Accurate Algorithmic Models
Connecting the research paper 'Assessing The Technical Feasibility of Conflict Prediction for Anticipatory Action' with ideas introduced in our 'Ethical Data Futures' course, keeping a human-in-the-loop in the forecasting process would be both more ethical and produce more accurate algorithmic prediction models. In the reading 'Ethical Dimensions of Visualization Research', the "separation between the people impacted by the data and the people consuming the data" is identified as a major ethical issue. The people who are turned into data generally have no say in how decisions will affect them when the data consumers and decision-makers are not part of the community or environment in which those decisions are made. To combat this ethical problem, and to increase the accuracy of algorithmic decision-making and forecasting models, the first paper recommends including individuals native to the environment under analysis, who can 'identify difficult to discern patterns or unmeasurable quantities' that machine learning algorithms can miss.
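As a purely hypothetical illustration (nothing here comes from either paper), one simple way a human-in-the-loop forecast could work is to blend a model's predicted conflict probability with an assessment from a local expert, weighted by how much trust is placed in that expert's judgment:

```python
def blended_forecast(model_prob, expert_prob, expert_weight=0.3):
    """Combine a model's conflict probability with a local expert's
    assessment using a weighted average.

    All function names, inputs, and the 0.3 default weight are
    hypothetical choices for illustration, not from the papers.
    """
    if not (0.0 <= model_prob <= 1.0 and 0.0 <= expert_prob <= 1.0):
        raise ValueError("probabilities must lie in [0, 1]")
    if not (0.0 <= expert_weight <= 1.0):
        raise ValueError("expert_weight must lie in [0, 1]")
    # Weighted average: the expert's view pulls the model's estimate
    # toward patterns the algorithm may not have captured.
    return (1 - expert_weight) * model_prob + expert_weight * expert_prob

# The model forecasts low risk, but an on-the-ground expert senses rising tension.
print(round(blended_forecast(0.20, 0.70), 2))  # prints 0.35
```

In practice a real system would be far more involved (structured elicitation, calibration of expert judgments, auditing of disagreements), but even this sketch shows how local knowledge can shift a forecast the algorithm would otherwise get wrong.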
Regarding my final project, I still need to do more research on the methods currently used to construct conflict forecasting models. So far, however, I have not seen much about incorporating human input into conflict prediction. This may be due to the large amount of time and resources that managing and developing such models would require.