Artificial intelligence (AI) systems are notoriously opaque and difficult to debug, which poses challenges to safety. Their massive parallelism renders path analysis and other traditional safety assessment tools inefficient, so a new methodology is required. At the same time, any AI system can be examined and understood at different levels of abstraction, each introducing its own set of assurance challenges. Broadly speaking, these levels are:

1. The physical layer: the actual hardware; manufacturing reliability and ageing issues (highly relevant to emerging nanoelectronic technologies, especially non-digital ones).
2. The connectomics layer: the wiring diagram; learning capacity and stability-of-learning issues.
3. The semantic layer: the vector/concept manipulation level; learning ‘quality’ and ‘relatedness’ issues (how a tester can ‘understand’ what the machine has stored in its memory, which affects debugging capability).

This project will therefore tackle assurance in physically implemented AI systems as follows:

1. Map out the different layers of abstraction in AI systems and seek out structure within each layer. For example, in the physical layer safety is affected by one group of hazards related to process variation and mismatch and another related to environmental conditions (e.g. operation in environments that are harsh in terms of temperature, humidity or pressure). An illustrative hazard map is sketched after this section.
2. Investigate how hazard factors propagate up the chain of levels of abstraction to the top layer (the functional layer): this includes understanding what strategies can be employed at different levels to mitigate or eliminate uncertainty ‘rising up’ from lower levels (example technique: reducing the resolution of an analogue-to-digital converter until noise is negligible compared to the quantisation step; a worked sketch follows this section).
3. Dedicate specific effort to understanding the ‘semantic’ layer. This is relatively unexplored territory and relies on the key idea that artificial neural networks (ANNs) may be treated as modular functional blocks performing, e.g., classification or encoding of some stimulus or variable, which can then be aggregated into more complex systems for general symbol manipulation. At that level, debugging a complex ANN becomes much simpler because the encoding and symbol-manipulation operations can be checked at a higher level of abstraction where symbols are treated as numbers, independent of the detail of the underlying ANN function (a minimal code sketch of this idea also follows this section).

By the end of the project (3-year horizon) we therefore expect to have:

1. A roadmap of the hazard environment in AI systems across their different levels of abstraction, including fine detail of each level and indications of the interactions between layers.
2. A methodology for designing AI systems that can be described as ‘design-for-assurance’, analogous to the ‘design-for-manufacture’ principle in standard electronics.
3. A good overview of emerging nanotechnologies and how they fit into the safety-in-AI picture through the physical layer.
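As a purely illustrative starting point for item 1 above, the Python sketch below shows one way the per-layer hazard structure might be recorded; the layer names follow the taxonomy above, but the individual hazard names, groupings and mitigation assignments are assumptions made for the example, not an established taxonomy.

```python
# Illustrative only: hazard names and mitigation assignments are assumptions
# for the example, not an established taxonomy.
hazard_map = {
    "physical": {
        "process variation & mismatch": {"mitigated_at": ["physical", "connectomics"]},
        "ageing/drift":                  {"mitigated_at": ["physical"]},
        "harsh environment (T/RH/p)":    {"mitigated_at": ["physical", "connectomics"]},
    },
    "connectomics": {
        "insufficient learning capacity": {"mitigated_at": ["connectomics"]},
        "unstable learning":              {"mitigated_at": ["connectomics", "semantic"]},
    },
    "semantic": {
        "poor concept relatedness": {"mitigated_at": ["semantic"]},
    },
}

# Flag hazards that cannot be fully contained within their own layer,
# i.e. those that propagate up the chain of abstraction levels.
for layer, hazards in hazard_map.items():
    for name, info in hazards.items():
        if any(m != layer for m in info["mitigated_at"]):
            print(f"{name!r} (origin: {layer}) needs handling at another layer")
```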
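For the ADC example in item 2, the snippet below is a minimal sketch assuming an ideal converter and a simple rule that the quantisation step (LSB) must exceed the input-referred RMS noise by a chosen margin; the function name and the default margin of 3 are illustrative choices, not values from the project.

```python
import math

def max_safe_resolution(full_scale_v, noise_rms_v, margin=3.0):
    """Largest resolution (in bits) at which the quantisation step
    lsb = full_scale_v / 2**bits still exceeds margin * noise_rms_v,
    i.e. the analogue noise stays 'hidden' below one code."""
    return int(math.floor(math.log2(full_scale_v / (margin * noise_rms_v))))

# Example: 1 V full scale, 0.5 mV RMS input-referred noise -> at most 9 bits
print(max_safe_resolution(full_scale_v=1.0, noise_rms_v=0.5e-3))  # -> 9
```

Beyond that resolution the LSB shrinks below the noise floor, so the extra codes mainly digitise noise rather than signal; deliberately stopping at the coarser resolution is one way of preventing physical-layer uncertainty from rising into the layers above.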
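For item 3, the sketch below illustrates the intended style of symbol-level debugging using toy stand-ins for trained ANN blocks; the class, function and prototype values are all hypothetical. Each block exposes only a discrete symbol, and the checks assert on symbols and on the symbol-manipulation rule, never on the underlying weights or activations.

```python
import numpy as np

class EncoderBlock:
    """Toy stand-in for a trained ANN that encodes a stimulus vector as a
    discrete symbol (here: the index of the nearest stored prototype).
    Its internals are invisible to the layers above; only the symbol matters."""
    def __init__(self, prototypes):
        self.prototypes = np.asarray(prototypes, dtype=float)

    def encode(self, x):
        dists = np.linalg.norm(self.prototypes - np.asarray(x, dtype=float), axis=1)
        return int(np.argmin(dists))          # the 'symbol'

def symbolic_rule(symbol_a, symbol_b):
    """Symbol-level manipulation: operates on integers only,
    independent of how the encoders produced them."""
    return (symbol_a + symbol_b) % 4

# Two independently 'trained' (here: hand-made) encoder blocks.
enc_a = EncoderBlock([[0, 0], [1, 1], [2, 2], [3, 3]])
enc_b = EncoderBlock([[0, 1], [1, 0], [2, 3], [3, 2]])

# Debugging happens at the symbol level: assert on integer outputs and on
# the composed symbolic behaviour, not on individual weights or activations.
sa = enc_a.encode([0.9, 1.1])   # expect symbol 1
sb = enc_b.encode([2.1, 2.8])   # expect symbol 2
assert sa == 1 and sb == 2
assert symbolic_rule(sa, sb) == 3
print("symbol-level checks passed:", sa, sb, symbolic_rule(sa, sb))
```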
Yihan Pan received her Bachelor's degree (BEng) in Electronic Engineering from the University of Manchester in 2018. In 2019 she was awarded a master's degree in Analog and Digital Integrated Circuit Design by Imperial College London. Her master's project focused on brain-inspired arrays, building a novel platform that combines the fields of neuromorphic engineering and chemical sensing. She is currently a PhD student at the University of Southampton.
Alex has experience with emerging nanoelectronic memory technologies in the domains of artificial neural networks, biosignal processing and ‘trimmed digital’ systems, summarised by one Nature Communications publication in each respective area: doi: 10.1038/ncomms12611 (RRAM ANNs), doi: 10.1038/ncomms12805 (biosignal processing) and https://doi.org/10.1038/s41467-018-04624-8 (trimmable digital systems). Alex is currently running a grant on assurance in AI systems in collaboration with Thales, the University of Manchester and dstl.