Humans have begun to build programs and machines whose inner workings are neither transparent nor interpretable. It would be advisable to regulate their development.
An article by Andrés Ortega Klein, published by the Spanish news outlet «eldiario.es», examines the future of Artificial Intelligence and, in particular, the challenges posed by the "interpretability" and "auditability" of computer programs and machines. To ensure their "transparency", new machines should not be mere "black boxes", where we know what goes in and what comes out but not what happens inside. Users should be able to understand the skills, intentions, and situational constraints of these programs. The article cites current research on this topic by Rafael García, a Research Engineer at IMDEA Networks Institute.
2nd Image source: © EFE
Link to the original news item, published in Spanish by eldiario.es: 'Máquinas que no entendemos' ('Machines we do not understand').