
Titles, abstracts, slides, videos and bios

Session 1: Architecture

Edinburgh Academic Presenters

Software-Exposed Microarchitecture: An Idea Whose Time Has Come

Boris Grot: boris.grot@ed.ac.uk Slides Video WebPage

Moore’s Law has been the engine that powered decades of improvement in semiconductor technology. Over that time, generations of computer architects converted the transistor bounty of Moore’s Law into performance through architectural and microarchitectural innovation. Alas, Moore’s Law is dead, and the question facing the processor industry is how to deliver more performance without more transistors. In this talk, I will argue that the path forward lies in opening up the microarchitecture to software. Doing so opens a myriad of opportunities to improve, simplify and generally rethink computer architecture.

Bio: Boris Grot is an Associate Professor in the School of Informatics at the University of Edinburgh. His research is aimed at understanding and alleviating efficiency bottlenecks and capability shortcomings of processing platforms for data-intensive applications. Boris is a member of the MICRO Hall of Fame, and a recipient of multiple awards for his research, including two IEEE Micro Top Picks. Boris holds a PhD in Computer Science from The University of Texas at Austin.

Revolutionizing mobile and cloud via coherence
Vijay Nagarajan: vijay.nagarajan@ed.ac.uk Slides Video WebPage

Coherence protocols are ARM’s bread and butter. ARM was likely the first to release detailed coherence specifications publicly and has top-notch design and verification teams; ARM groks coherence. What’s there to do?
Lots more. I will show that raising the abstraction of coherence protocol design can potentially lead to: (1) correct-by-construction protocols, saving on verification cost; (2) easier integration of hierarchical and heterogeneous protocols, making life easier for ARM’s clients; and (3) answers to the programmability and data-movement challenges faced by today’s and tomorrow’s heterogeneous SoCs.
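As a toy illustration of what raising the abstraction can buy (the notation below is an assumption for this sketch, not taken from the talk): a designer specifies only the atomic, stable-state behaviour of an MSI protocol, and correct-by-construction tooling would derive the concurrent transient states, messages and verification obligations from it.

```python
# Hypothetical high-level protocol spec: stable states only.
# A generator in the correct-by-construction spirit would expand this
# table into the full concurrent protocol, including transient states.

MSI_STABLE = {
    # (state, event):    (next_state, actions)
    ("I", "load"):       ("S", ["issue GetS"]),
    ("I", "store"):      ("M", ["issue GetM"]),
    ("S", "store"):      ("M", ["issue GetM"]),
    ("S", "evict"):      ("I", []),
    ("M", "evict"):      ("I", ["writeback"]),
    ("S", "Inv"):        ("I", ["ack invalidation"]),
    ("M", "Fwd-GetS"):   ("S", ["send data"]),
    ("M", "Fwd-GetM"):   ("I", ["send data"]),
}

def step(state, event):
    """Advance one cache line through the stable-state spec."""
    next_state, actions = MSI_STABLE[(state, event)]
    return next_state, actions

assert step("I", "load") == ("S", ["issue GetS"])
```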
Second, I will argue that cloud programmers are craving inter-node cache-coherent shared memory. They just don’t know it yet. Business-as-usual coherence protocols will not do, however. The datacentre needs a new family of fault-tolerant, programmable coherence protocols. ARM can lead the way.

Bio: Vijay Nagarajan is an Associate Professor with interests spanning computer architecture, programming languages and computer systems. He is the lead author of the latest edition of the Primer on Memory Consistency and Cache Coherence, which has been downloaded ~14K times. Vijay is a recipient of an Intel Early Career Faculty Award, a PACT best-paper award, and two IEEE Micro Top Picks Honourable Mentions.

How to get rid of the legacy – Hardware/Software solutions for the future

Bjoern Franke: bfranke@inf.ed.ac.uk Slides Video WebPage

Legacy systems are all around us – from processor architectures through application software to essential software development tools like compilers. Legacy systems, either hardware or software, represent value based on prior investment, and owners of legacy systems are less likely to embrace architectural innovations when faced with unpredictable follow-on costs for porting or recreating their legacy systems. In this talk, we will focus on four themes outlining how novel tools can help maintain this value, either through guided evolution or migration. Advances in simulation technology enable processor architects to incrementally develop new versions of existing GPUs, while our related dynamic binary translation technology enables users to run their existing legacy binary code at high performance on host platforms featuring a different ISA. Source code rejuvenation approaches, exemplified in this talk by well-known collection data types, demonstrate the power of abstraction in making software portable, whilst simultaneously reducing source code complexity by adopting new language or library features. Finally, compilers themselves are identified as legacy software systems stifling innovation in the architecture space, and a possible solution to this problem is outlined. Pathways for software evolution and migration are of relevance to Arm, its partners and customers in a dynamic marketplace, where legacy software systems outlive hardware systems.
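To give a flavour of source code rejuvenation (a generic Python analogue, not the talk’s tool or its target language): a legacy loop with manual index bookkeeping and a hand-rolled grouping idiom can be rewritten against a modern collection abstraction, reducing complexity while preserving behaviour.

```python
from collections import defaultdict

# Legacy style: manual index bookkeeping and a hand-rolled "multimap".
def group_by_extension_legacy(paths):
    groups = {}
    for i in range(len(paths)):
        ext = paths[i].rsplit(".", 1)[-1]
        if ext not in groups:
            groups[ext] = []
        groups[ext].append(paths[i])
    return groups

# Rejuvenated: the library's collection type states the intent directly,
# which is both simpler and easier to port.
def group_by_extension(paths):
    groups = defaultdict(list)
    for path in paths:
        groups[path.rsplit(".", 1)[-1]].append(path)
    return dict(groups)
```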

Bio: Björn Franke’s research interests span a range of topics in the field of software transformation, ranging from compiler and code optimisation, through parallelisation, to dynamic binary translation and JIT compilation. He has a track record of successful industrial collaborations, developing, for example, code-size compression techniques with Qualcomm and high-performance processor simulation technology with Synopsys, and influencing major software products (e.g. Facebook’s HHVM and Google’s V8 JavaScript engine) through his research on concurrent and parallel JIT compilation.

Matching Hardware to Software Enables Heterogeneous Hardware Design

Michael O’Boyle: mob@inf.ed.ac.uk Slides Video WebPage

Moore’s Law has been the main driver behind the extraordinary success of computer systems. However, with the technology roadmap showing a decline in transistor scaling, computer systems are increasingly specialised and diverse. As it stands, software will simply not fit, and current compiler technology is incapable of bridging the gap. This talk describes our work in automatically matching legacy software to heterogeneous hardware with minimal user involvement.

Q: Why is this of interest to ARM?
A: If we can match legacy software to new hardware automatically, it enables innovation in hardware design. More importantly, it directs attention to designs that are likely to match existing and emerging customer applications.
The talk also describes recent work on combining neural architecture search with program transformation exploration, on software-defined hardware, and on tackling more complex acceleration.
00:00 Why ARM should be interested
03:40 Matching Hardware to Software
15:24 Neural Architecture Search as Program Transformation Exploration
27:51 Complex Acceleration

Bio: Michael O’Boyle is a professor of computer science at the University of Edinburgh. He is best known for his work in incorporating machine learning into compilation and parallelization. He has published over 150 papers, receiving six best paper and two Test of Time awards. He is the director of the ARM/Edinburgh Research Centre of Excellence, a senior EPSRC Research Fellow and a Fellow of the BCS.


Edinburgh Student Presentations

Supporting Hardware Accelerators with Program Synthesis

Jackson Woodruff: j.c.woodruff@sms.ed.ac.uk Slides Video WebPage

Hardware accelerators present the potential for significant power/performance improvements over general-purpose processors. However, exposing the details, and the inflexibility, of hardware to programmers makes accelerators challenging to use and removes most portability. In my talk, I will discuss the potential of program synthesis to close the gap between traditional software development and hardware accelerators, and dig into concrete examples of regular-expression accelerators and Fourier-transform accelerators.
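As a rough sketch of one ingredient such a synthesis tool can use (the function names and the accelerator stub below are assumptions, not the talk’s system): test whether legacy code is behaviourally equivalent to an accelerator’s interface on randomly generated inputs, and offload the call only when the test passes.

```python
import numpy as np

def user_dft(x):
    """A user's hand-written O(n^2) DFT: legacy code we would like to lift."""
    n = len(x)
    return np.array([sum(x[j] * np.exp(-2j * np.pi * k * j / n)
                         for j in range(n)) for k in range(n)])

def accelerator_fft(x):
    """Stand-in for an FFT accelerator's API (numpy plays that role here)."""
    return np.fft.fft(x)

def equivalent_on_samples(f, g, n=16, trials=20, tol=1e-6):
    """Accept the candidate mapping if f and g agree on random test vectors."""
    rng = np.random.default_rng(0)
    for _ in range(trials):
        x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        if not np.allclose(f(x), g(x), atol=tol):
            return False
    return True

if equivalent_on_samples(user_dft, accelerator_fft):
    print("user_dft can be offloaded to the FFT accelerator")
```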
Bio:
Jackson is a PhD student at the University of Edinburgh, working on compiler support for hardware accelerators. He completed his undergraduate and master’s degrees at the University of Cambridge.

Benchmarking, analysis, and optimization of serverless function snapshots

Dmitrii Ustiugov: dmitrii.ustiugov@ed.ac.uk Slides Video WebPage

Serverless computing has seen rapid adoption due to its high scalability and flexible, pay-as-you-go billing model. In serverless, developers structure their services as a collection of functions, sporadically invoked by various events, such as clicks. High inter-arrival time variability of function invocations motivates the providers to start new function instances upon each invocation, leading to significant cold-start delays that degrade user experience. To reduce cold-start latency, the industry has turned to snapshotting, whereby an image of a fully-booted function is stored on disk, enabling a faster invocation compared to booting a function from scratch.
This work introduces vHive, an open-source framework for serverless experimentation with the goal of enabling researchers to study and innovate across the entire serverless stack. Using vHive, we characterize a state-of-the-art snapshot-based serverless infrastructure, built on the industry-leading Containerd orchestration framework and the Firecracker hypervisor. We find that the execution time of a function started from a snapshot is 95% higher, on average, than when the same function is memory-resident. We show that the high latency is attributable to frequent page faults as the function’s state is brought from disk into guest memory one page at a time. Our analysis further reveals that functions access the same stable working set of pages across different invocations of the same function. By leveraging this insight, we build REAP, a lightweight software mechanism for serverless hosts that records functions’ stable working set of guest memory pages and proactively prefetches it from disk into memory. Compared to baseline snapshotting, REAP slashes cold-start delays by 3.7x on average.
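A minimal sketch of the record-and-prefetch idea behind REAP (the class and helper names are assumptions, not vHive’s API): the first invocation records which snapshot pages the function faults on; subsequent invocations bulk-load that working set before the function runs.

```python
PAGE_SIZE = 4096

class WorkingSetRecorder:
    """Record phase: log the snapshot pages a function touches,
    in first-fault order, during one invocation."""
    def __init__(self):
        self.seen = set()
        self.pages = []

    def on_page_fault(self, guest_addr):
        # Would be called from the (hypothetical) hypervisor fault handler.
        offset = (guest_addr // PAGE_SIZE) * PAGE_SIZE
        if offset not in self.seen:
            self.seen.add(offset)
            self.pages.append(offset)

def prefetch_working_set(snapshot_path, pages):
    """Prefetch phase: bulk-read the recorded working set from the
    snapshot file, so later invocations hit warm memory instead of
    taking one demand page fault per page."""
    warm = {}
    with open(snapshot_path, "rb") as snap:
        for offset in sorted(pages):   # sequential disk access
            snap.seek(offset)
            warm[offset] = snap.read(PAGE_SIZE)
    return warm
```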

Bio:
Dmitrii is a final-year PhD student, co-advised by Prof. Boris Grot and Prof. Edouard Bugnion (EPFL). Dmitrii’s research interests lie at the intersection of Computer Systems and Architecture, with a current focus on support for cloud and serverless computing.

Improving Reliability and Performance of Datacenter Systems via Coherence

Adarsh Patil: adarsh.patil@ed.ac.uk Slides Video WebPage

In this talk, I will present two works in which we design tailored coherence protocols to improve the reliability and performance of modern shared-memory hardware.

In the first work, we aim to combat increased memory-system failure rates. We propose Dvé, a hardware-driven replication mechanism in which data blocks are replicated across two different sockets of a cache-coherent NUMA system. Each data block is also accompanied by a code with strong error-detection capabilities, so that when an error is detected, correction is performed using the replica. Such an organization has the advantage of offering two independent points of access to data, which enables: (a) strong error correction that can recover from a range of faults affecting any of the components in the memory, up to and including the memory controller; and (b) higher performance by providing another, nearer point of memory access. Dvé realizes both of these benefits via Coherent Replication, a technique that builds on top of existing cache coherence protocols not only to keep the replicas in sync for reliability, but also to provide coherent access to the replicas during fault-free operation for performance. Dvé can flexibly provide these benefits on demand by simply using provisioned memory capacity which, as reported in recent studies, is often underutilized in today’s systems. Thus, Dvé introduces a unique design point that offers higher reliability and performance for workloads that do not require the entire memory capacity.
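A toy model of the read path this organization enables (the structure below is assumed for illustration, and CRC32 stands in for the stronger detection code): reads are served from the nearer replica, and a detected error is corrected from the other socket’s replica.

```python
import zlib

class ReplicatedMemory:
    """Toy two-socket replicated memory with per-block error detection."""
    def __init__(self):
        self.sockets = {0: {}, 1: {}}   # socket -> {addr: (data, check)}

    def write(self, addr, data: bytes):
        entry = (data, zlib.crc32(data))
        self.sockets[0][addr] = entry   # Coherent Replication would keep
        self.sockets[1][addr] = entry   # both replicas in sync.

    def read(self, addr, local_socket):
        data, check = self.sockets[local_socket][addr]  # nearer replica first
        if zlib.crc32(data) == check:
            return data
        # Detected corruption: correct from the remote replica and repair.
        data, check = self.sockets[1 - local_socket][addr]
        assert zlib.crc32(data) == check, "both replicas corrupted"
        self.sockets[local_socket][addr] = (data, check)
        return data
```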

In the second work, we aim to provide improved performance and availability for Function-as-a-Service (FaaS) deployments. For this, we propose to employ a disaggregated memory backend to share memory segments between multiple servers that host function instances. We enable such a shared-memory organization by providing suitable address mapping and translation services. The resulting organization already provides the ability to employ existing low-latency hardware caches and provides automatic/implicit data transfer without any software intervention. Our goal is to further improve performance with a bespoke inter-node coherence protocol tailored to the sharing characteristics of FaaS applications. We also aim to provide suitable memory consistency and availability guarantees during partial failures.

Bio:

Adarsh Patil is a third-year PhD student at the University of Edinburgh. His research focus lies broadly in the area of memory-systems design. Notably, his work has targeted optimizing DRAMs for heterogeneous architectures, TLB organization for virtualization, and coherence protocols for reliability. Formerly, he was a Research Scientist at Intel, where he worked on software memory optimizations for HPC applications and neural networks. He holds a master’s degree in computer science from the Indian Institute of Science.


Session 2: Security

Edinburgh Academic Presenters

High-Performance, Secure, Reliable Systems

Sam Ainsworth: sam.ainsworth@ed.ac.uk Slides Video WebPage

This is a whistle-stop tour through a variety of architectural, software and runtime techniques to make systems perform better (via memory-level parallelism), to achieve fault tolerance against bit-flip errors, and to defend against some of the most critical attacks on today’s systems at very low overheads. We’ll look at past work, currently active projects, and the insights behind future directions, and discuss the various ways that Arm has been directly involved in the research along each track.

Bio:
Sam is a Lecturer in Systems and Hardware Security at the University of Edinburgh. His PhD (Cambridge, 2018) was funded by a CASE award with Arm, with whom he holds two joint patents, on programmable prefetchers and hardware fault tolerance.

David Aspinall


Edinburgh Student Presentations

Data generation and sanitisation for machine learning systems in security-sensitive contexts

Rob Flood: s1784464@sms.ed.ac.uk Slides Video WebPage

Currently, the quality of many publicly available datasets containing security-relevant information is low. This is because data taken from operational environments often contains personally identifiable information about the employees or partners of the organisation. Releasing such data without sufficient sanitisation is illegal under a growing number of legal frameworks, such as GDPR, and it is a risk that few organisations are willing to take. Many publicly available datasets are thus synthetically generated, created via an artificial process rather than collected from real-world environments. However, the realism of such synthetically generated data is often poor, and there are many open questions about how training with such data affects the performance of machine learning classifiers. My research thus far attempts to answer some of these questions by focusing on the generation and sanitisation of data in a rigorous manner.

Bio:

I’m Rob Flood, a first-year PhD student in the LFCS at the University of Edinburgh, supervised by David Aspinall. I have an undergraduate degree in Mathematics from Trinity College Dublin and an MSc in Computer Science from the University of Edinburgh. My research focuses on synthetic data generation and data sanitisation for machine learning systems, as well as the robustness and security properties of these systems.


Session 3: ML/IoT

Edinburgh Academic Presenters

Choosing neural networks without training them

Amos Storkey: a.storkey@ed.ac.uk Slides Video WebPage

Neural architecture search is a costly and often cumbersome business. There are many architectures that perform poorly, and there is often more noise than signal in the search objective for the architectures that perform well.

In this talk I will briefly discuss the key ideas behind approaches we have recently developed that examine information available from the function described by a neural network _before learning_, which gives reasonable indicators of final network performance, and describe their potential for use in hardware-aware network learning and neural architecture search.

This is joint work with Joseph Mellor, Jack Turner and Elliot J. Crowley.
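A hedged sketch of what such a training-free indicator can look like (an assumed variant, in the spirit of this line of work rather than its exact score): feed one minibatch through a randomly initialised network, collect the binary ReLU activation pattern each input induces, and score the architecture by how distinguishable those patterns are.

```python
import torch

def activation_score(net, batch):
    """Score an untrained net by the log-determinant of a similarity
    kernel over per-example binary ReLU activation codes.
    Assumes the network uses nn.ReLU modules (not functional relu)."""
    codes = []

    def hook(_module, _inp, out):
        codes.append((out.detach() > 0).flatten(1).float())

    handles = [m.register_forward_hook(hook)
               for m in net.modules() if isinstance(m, torch.nn.ReLU)]
    with torch.no_grad():
        net(batch)
    for h in handles:
        h.remove()

    c = torch.cat(codes, dim=1)        # one binary code per example
    n_feats = c.shape[1]
    # Kernel entry: fraction of activation bits on which two inputs agree;
    # near-identical patterns shrink the determinant, distinct ones grow it.
    k = (c @ c.t() + (1 - c) @ (1 - c).t()) / n_feats
    return torch.logdet(k + 1e-4 * torch.eye(len(batch))).item()

# Usage: rank candidate architectures by score on a single minibatch,
# keeping the highest-scoring ones for (hardware-aware) training.
```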

Bio:

Amos Storkey is Professor of Machine Learning and AI in the School of Informatics, University of Edinburgh, with a background in mathematics (MA Maths, Trinity, Cambridge) and theoretical physics (Part III Maths) before focusing on machine learning (MSc, PhD, Imperial College London). He moved to Edinburgh after his PhD, where he now leads a research team focused on deep neural networks, Bayesian and probabilistic models, transactional machine learning, and efficient inference. He is currently director of the EPSRC Centre for Doctoral Training in Data Science, has been Programme Chair for the AI and Statistics conference, and was founder of the Edinburgh Deep Learning Workshop.

Future Efficient Distributed AI Systems
Luo Mai: luo.mai@ed.ac.uk Slides Video WebPage
Distributed AI systems are key to fulfilling the promise of AI technologies. The execution of such systems often consumes tremendous resources on the cloud, edge and endpoints, making high efficiency a key design goal in AI systems. In this talk, I will describe the opportunities of leveraging ARM technologies to build future efficient distributed AI systems. I will also introduce my recent research projects, which have led to several efficient AI systems. These systems contain novel proposals that can significantly improve (1) hardware efficiency, (2) statistical efficiency and (3) configuration efficiency of large-scale AI clusters.
Bio: Luo Mai is an Assistant Professor at the School of Informatics, University of Edinburgh, where he leads the large-scale AI systems group. His research has led to publications in prestigious venues and popular open-source AI libraries, such as TensorLayer, HyperPose and KungFu. Before joining Edinburgh, Luo was a research associate at Imperial College London and a visiting researcher at Microsoft Research. Luo received his PhD from Imperial College London in 2018, and his study was supported by a Google PhD Fellowship.


IoT Security on the Edge

Paul Patras: paul.patras@ed.ac.uk Slides Video WebPage

The volume of mobile traffic continues to grow exponentially, and as we put more and more gadgets online, so does the number of cyber attacks. More worryingly, security vulnerabilities are uncovered even in connectivity technologies considered to be mature, as they see widespread adoption in IoT settings. In this talk I will present my team’s recent work, which demonstrates that billions of Bluetooth-equipped devices can be tracked at scale, while intrusion detection systems adopting ML can be subverted. I will also show how ML should be used in a principled way to detect network threats in their infancy, and how to build such detection systems with a view to deployment at the edge.
Bio: 
Paul Patras is a Reader (Associate Professor) in Informatics at the University of Edinburgh, where he leads the Mobile Intelligence Lab. His research crosses the boundaries between mobile networking, security, and data science. His team has pioneered several applications of AI to the analysis, security, and management of mobile systems. Paul is also a co-founder of Net AI, a university spin-out whose mission is to put mobile network management on autopilot in the cloud.

Edinburgh Student Presentations

NetSentry: A Deep Learning Approach to Detecting Incipient Large-scale Network Attacks

Haoyu Liu: haoyu.liu@ed.ac.uk Slides Video WebPage

ML techniques are increasingly adopted to tackle ever-evolving, high-profile network attacks, including DDoS, botnet and ransomware attacks, due to their unique ability to extract complex patterns hidden in data streams. These approaches are, however, routinely validated with data collected in the same environment, and, as we uncover, their performance degrades when they are deployed in different network topologies and/or applied to previously unseen traffic. This suggests that malicious/benign behaviors are largely learned superficially, and that ML-based NIDS need revisiting to be effective in practice.
In this paper we dive into the mechanics of large-scale network attacks, with a view to understanding how to use ML for NID in a principled way. We reveal that, although cyberattacks vary significantly in terms of payloads, vectors and targets, their early stages, which are critical to successful attack outcomes, share many similarities and exhibit important temporal correlations. Therefore, we treat NID as a time-sensitive task and propose NetSentry, perhaps the first NIDS of its kind, which builds on Bi-ALSTM, an original ensemble of sequential neural models, to detect network threats before they spread. We cross-evaluate our architecture using two practical datasets, training on one and testing on the other, and demonstrate F1 score gains above 33% over the state of the art, as well as up to 3 times higher rates of detecting attacks such as XSS and web bruteforce. Further, we put forward a novel data augmentation technique that boosts the generalization abilities of a broad range of supervised deep learning algorithms, leading to average F1 score gains above 35%.
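Bi-ALSTM itself is the paper’s original ensemble; as a rough illustration of treating NID as a time-sensitive task, here is a minimal bidirectional-LSTM flow classifier (a simplification under assumed feature dimensions, not NetSentry itself).

```python
import torch
import torch.nn as nn

class FlowClassifier(nn.Module):
    def __init__(self, n_features, n_classes, hidden=64):
        super().__init__()
        # A bidirectional LSTM reads the flow's feature sequence in both
        # directions, capturing the early-stage temporal correlations
        # that the paper argues are shared across attack types.
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):             # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # classify from the final time step

# Example: score a batch of 8 flows, each 20 steps of 32 features.
model = FlowClassifier(n_features=32, n_classes=5)
logits = model(torch.randn(8, 20, 32))
```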

Bio:

Haoyu Liu is a second-year PhD student in the School of Informatics at the University of Edinburgh. He received B.Sc. degrees from the University of Edinburgh and the South China University of Technology in May 2019. His research focuses on developing machine learning tools to counter threats in the network security and privacy domains.
Towards Secure and Resilient IoT Infrastructures: an AI Perspective
Alec Diallo: alec.frenn@ed.ac.uk Slides Video WebPage
With increasingly complex network infrastructures and the proliferation of IoT devices, existing cyber defense solutions are quickly becoming obsolete in the face of a rapidly transforming threat landscape. This exponential increase in potential attack vectors, combined with ubiquitous and, most recently, AI-enabled cyber attacks, has motivated the use of AI technologies to tackle these new threats effectively. However, while AI-based solutions make detecting, securing and mitigating attacks more accurate, they also introduce a new class of attack vectors, commonly referred to as adversarial attacks. In this talk, I will present ACID, our AI-based network intrusion detection system, which optimally distinguishes different types of network traffic, and briefly introduce our current work on protecting ML systems against adversarial attacks.
Bio:

Alec is a second-year PhD student at the University of Edinburgh. He previously worked for three years as a machine learning research engineer. His current research seeks to bridge the gap between the ever-evolving nature of cyber threats and the security and privacy of users’ data on networked systems, using Artificial Intelligence to build automatic network-threat detection and counteraction mechanisms.
