As a first step toward building novel AI design methodologies in pursuit of this vision, our project focuses on the ability to align human and machine views by grounding meaning in interaction, using models that can be flexibly adapted and recombined at run-time. We believe this is an important ingredient in unlocking new levels of autonomy, building on previous work combining robotics, language, vision, and symbolic AI research.
We will run three pilot (sub-)projects to explore the feasibility of breaking new ground in this area by combining different methods, each focusing on a partial but important facet of the problem:
Incremental composition of complex semantic structures in embodied interaction through conversational dialogue – piloted in the domain of teaching surgical assistance robots new skills.
Learning semantically transformable representations to construct compositional models of multimodal real-world data – piloted on the problem of generating narratives from instructional videos.
Learning the vocabulary and “grammar” humans apply when solving a task to enable AI systems to create human-understandable explanations of their own behaviours – piloted on the problem of extracting these models from perceptual data in human image classification tasks.
These pilot projects will deliver models for individual functionalities, developed within experimental domains representative of the applications envisioned. Naturally, this is only one possible direction and focus within the wider area, and we hope to mobilise a broader community across the UK to explore others.
An important overarching goal of the project will be to integrate these different approaches into a generalisable design and computational framework, while exploring the human factors and ethical dimensions that must be addressed to establish advanced levels of autonomy in human-AI collaboration. This framework will be developed jointly with the wider Turing community through workshops to establish collaborations and build a pipeline and roadmap for this area.