Titles, abstracts, slides, videos and bios
Session 1: Architecture
Edinburgh Academic Presenters
Software-Exposed Microarchitecture: An Idea Whose Time Has Come
Boris Grot: boris.grot@ed.ac.uk Slides Video WebPage
Moore’s Law has been the engine that powered decades of improvement in semiconductor technology. Over that time, generations of computer architects converted the transistor bounty of Moore’s Law into performance through architectural and microarchitectural innovation. Alas, Moore’s Law is dead, and the question facing the processor industry is how to deliver more performance without more transistors. In this talk, I will argue that the path forward lies in opening up the microarchitecture to software. Doing so creates a myriad of opportunities to improve, simplify and generally rethink computer architecture.
Bio: Boris Grot is an Associate Professor in the School of Informatics at the University of Edinburgh. His research is aimed at understanding and alleviating efficiency bottlenecks and capability shortcomings of processing platforms for data-intensive applications. Boris is a member of the MICRO Hall of Fame, and a recipient of multiple awards for his research, including two IEEE Micro Top Picks. Boris holds a PhD in Computer Science from The University of Texas at Austin.
Revolutionizing mobile and cloud via coherence
Bio: Vijay Nagarajan is an Associate Professor with interests spanning computer architecture, programming languages and computer systems. He is the lead author of the latest edition of the Primer on Memory Consistency and Cache Coherence, which has been downloaded ~14K times. Vijay is a recipient of an Intel Early Career Faculty Award, a PACT best-paper award, and two IEEE Top-Picks Honourable mentions.
How to get rid of the legacy – Hardware/Software solutions for the future
Bjoern Franke: bfranke@inf.ed.ac.uk Slides Video WebPage
Matching Hardware to Software Enables Heterogeneous Hardware Design
Michael O’Boyle: mob@inf.ed.ac.uk Slides Video WebPage
Moore’s Law has been the main driver behind the extraordinary success of computer systems. However, with the technology roadmap showing a decline in transistor scaling, computer systems are becoming increasingly specialised and diverse. As it stands, software will simply not fit, and current compiler technology is incapable of bridging the gap. This talk describes our work in automatically matching legacy software to heterogeneous hardware with minimal user involvement.
Bio: Michael O’Boyle is a professor of computer science at the University of Edinburgh. He is best known for his work in incorporating machine learning into compilation and parallelization. He has published over 150 papers, receiving six best paper and two Test of Time awards. He is the director of the ARM/Edinburgh Research Centre of Excellence, a senior EPSRC Research Fellow and a Fellow of the BCS.
Edinburgh Student Presentations
Supporting Hardware Accelerators with Program Synthesis
Jackson Woodruff: j.c.woodruff@sms.ed.ac.uk Slides Video WebPage
Benchmarking, analysis, and optimization of serverless function snapshots
Dmitrii Ustiugov: dmitrii.ustiugov@ed.ac.uk Slides Video WebPage
Serverless computing has seen rapid adoption due to its high scalability and flexible, pay-as-you-go billing model. In serverless, developers structure their services as a collection of functions, sporadically invoked by various events like clicks. High inter-arrival time variability of function invocations motivates the providers to start new function instances upon each invocation, leading to significant cold-start delays that degrade user experience. To reduce cold-start latency, the industry has turned to snapshotting, whereby an image of a fully-booted function is stored on disk, enabling a faster invocation compared to booting a function from scratch.
This work introduces vHive, an open-source framework for serverless experimentation with the goal of enabling researchers to study and innovate across the entire serverless stack. Using vHive, we characterize a state-of-the-art snapshot-based serverless infrastructure, based on the industry-leading Containerd orchestration framework and Firecracker hypervisor technologies. We find that the execution time of a function started from a snapshot is 95% higher, on average, than when the same function is memory-resident. We show that the high latency is attributable to frequent page faults as the function’s state is brought from disk into guest memory one page at a time. Our analysis further reveals that functions access the same stable working set of pages across different invocations of the same function. Leveraging this insight, we build REAP, a lightweight software mechanism for serverless hosts that records functions’ stable working set of guest memory pages and proactively prefetches it from disk into memory. Compared to baseline snapshotting, REAP slashes cold-start delays by 3.7x, on average.
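The core of the REAP idea described above can be sketched in a few lines: record which guest memory pages a function touches on each invocation, keep only the pages seen in every run (the stable working set), and prefetch those eagerly before the next cold start. The sketch below is purely illustrative; the class and method names are hypothetical and do not reflect vHive's actual interfaces.

```python
# Hypothetical sketch of the REAP mechanism: intersect per-invocation
# page-fault traces to find the stable working set, then prefetch it.
class WorkingSetRecorder:
    def __init__(self):
        self.stable_set = None  # pages observed in every invocation so far

    def record_invocation(self, touched_pages):
        """Fold one invocation's page-fault trace into the stable set."""
        touched = set(touched_pages)
        if self.stable_set is None:
            self.stable_set = touched
        else:
            # Pages outside the intersection were incidental; drop them.
            self.stable_set &= touched

    def prefetch_plan(self):
        """Pages to load eagerly from the snapshot, ideally in one batched read."""
        return sorted(self.stable_set or ())


recorder = WorkingSetRecorder()
recorder.record_invocation([0x10, 0x11, 0x20, 0x30])  # first (record) run
recorder.record_invocation([0x10, 0x11, 0x20, 0x41])  # second run
print(recorder.prefetch_plan())  # → [16, 17, 32], the pages common to both runs
```

Prefetching the whole set in a single batched disk read, rather than faulting pages in one at a time, is what removes the bulk of the cold-start latency the abstract attributes to frequent page faults.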
Improving Reliability and Performance of Datacenter Systems via Coherence
Adarsh Patil: adarsh.patil@ed.ac.uk Slides Video WebPage
In this talk, I will present two works in which we design tailored coherence protocols to improve the reliability and performance of modern shared-memory hardware.
In the first work, we aim to combat increased memory system failure rates. We propose Dvé, a hardware-driven replication mechanism where data blocks are replicated in two different sockets across a cache-coherent NUMA system. Each data block is also accompanied by a code with strong error detection capabilities so that when an error is detected, correction is performed using the replica. Such an organization has the advantage of offering two independent points of access to data, which enables: (a) strong error correction that can recover from a range of faults affecting any of the components in the memory, up to and including the memory controller, and (b) higher performance by providing another, nearer point of memory access. Dvé realizes both of these benefits via Coherent Replication, a technique that builds on top of existing cache coherence protocols not only to keep the replicas in sync for reliability, but also to provide coherent access to the replicas during fault-free operation for performance. Dvé can flexibly provide these benefits on demand by simply using the provisioned memory capacity which, as reported in recent studies, is often underutilized in today’s systems. Thus, Dvé introduces a unique design point that offers higher reliability and performance for workloads that do not require the entire memory capacity.
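The read and write paths of the Dvé organization can be illustrated with a toy functional model: every block is stored twice (one copy per socket) alongside an error-detecting code, reads are served from the nearer copy, and a detected error is corrected from the remote replica. This is a sketch of the idea only; the names are hypothetical, a CRC stands in for the actual detection code, and the real mechanism is a hardware coherence protocol, not software.

```python
# Toy functional model of Dvé-style replication across two sockets.
import zlib

class ReplicatedMemory:
    def __init__(self):
        self.sockets = [{}, {}]  # block_id -> (data, checksum), one dict per socket

    @staticmethod
    def _code(data: bytes) -> int:
        return zlib.crc32(data)  # stand-in for a strong error-detecting code

    def write(self, block_id, data: bytes):
        entry = (data, self._code(data))
        for socket in self.sockets:   # Coherent Replication keeps both
            socket[block_id] = entry  # copies in sync on every write

    def read(self, block_id, near_socket=0):
        # Fault-free case: serve from the nearer copy for performance.
        data, code = self.sockets[near_socket][block_id]
        if self._code(data) == code:
            return data
        # Detected error: correct using the replica in the other socket.
        data, code = self.sockets[1 - near_socket][block_id]
        assert self._code(data) == code, "both replicas corrupted"
        return data


mem = ReplicatedMemory()
mem.write("A", b"payload")
# Simulate a fault in socket 0's copy; the replica still recovers the data.
mem.sockets[0]["A"] = (b"corrupt", mem.sockets[0]["A"][1])
print(mem.read("A"))  # recovered from socket 1's replica
```

The two independent access points are visible in the read path: the nearer copy gives the performance benefit, and the replica gives the recovery benefit, exactly the (a)/(b) pairing in the abstract.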
In the second work, we aim to provide improved performance and availability for Function-as-a-Service deployments. For this, we propose to employ a disaggregated memory backend to share memory segments between multiple servers that host function instances. We enable such a shared memory organization by providing suitable address mapping and translation services. The resulting organization already provides the ability to employ existing low-latency hardware caches and provides automatic/implicit data transfer without any software intervention. Our goal is to further improve performance with a bespoke inter-node coherence protocol tailored to the sharing characteristics of FaaS applications. We also aim to provide suitable memory consistency and availability guarantees during partial failures.
Bio:
Adarsh Patil is a 3rd year PhD student at the University of Edinburgh. His research focus lies broadly in the area of memory systems design. Notably, his work has targeted optimizing DRAM for heterogeneous architectures, TLB organization for virtualization, and coherence protocols for reliability. Formerly he was a Research Scientist at Intel, where he worked on software memory optimizations for HPC applications and neural networks. He holds a master’s degree in computer science from the Indian Institute of Science.
Session 2: Security
Edinburgh Academic Presenters
High-Performance, Secure, Reliable Systems
Sam Ainsworth: sam.ainsworth@ed.ac.uk Slides Video WebPage
David Aspinall
Edinburgh Student Presentations
Data generation and sanitisation for machine learning systems in security-sensitive contexts.
Rob Flood: s1784464@sms.ed.ac.uk Slides Video WebPage
Currently, the quality of many publicly available datasets containing security-relevant information is low. This is because data taken from operational environments often contains the personally identifiable information of the employees or partners of that organisation. Releasing such data without sufficient sanitisation is illegal under a growing number of legal frameworks, such as GDPR, and is therefore a risk that few organisations are willing to take. Many publicly available datasets are thus synthetically generated, created via an artificial process rather than collected from real-world environments. However, the realism of such synthetically generated data is often poor, and there are many open questions about how training with such data impacts the performance of machine learning classifiers. My research thus far attempts to answer some of these questions by focusing on the generation and sanitisation of data in a rigorous manner.
Bio:
I’m Rob Flood, a first-year PhD student in the LFCS at the University of Edinburgh, supervised by David Aspinall. I have an undergraduate degree in Mathematics from Trinity College Dublin and an MSc in Computer Science from the University of Edinburgh. My research focuses on synthetic data generation and data sanitisation for machine learning systems, as well as the robustness and security properties of these systems.
Session 3: ML/IoT
Edinburgh Academic Presenters
Choosing neural networks without training them.
Amos Storkey: a.storkey@ed.ac.uk Slides Video WebPage
Neural architecture search is a costly and often cumbersome business. There are many architectures that perform poorly, and often more noise than signal in the search objective for architectures that perform well.
In this talk I will briefly discuss the key ideas behind approaches we have recently developed that examine information available from the function described by a neural network _before learning_, which gives reasonable indicators of final network performance, and describe the potential for use in hardware-aware network learning and neural architecture search.
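One way to make the idea of scoring networks before learning concrete is to evaluate an untrained ReLU network on a small batch and measure how differently it treats the inputs, for instance by counting distinct activation patterns. The sketch below is a deliberately simplified illustration under that assumption; the function names are hypothetical and the published training-free scores are considerably more refined than this.

```python
# Illustrative sketch: score an untrained ReLU MLP by counting how many
# distinct on/off activation patterns it assigns to a small input batch.
import numpy as np

rng = np.random.default_rng(0)

def activation_patterns(widths, batch):
    """Binary ReLU on/off codes of a randomly initialized MLP, per input."""
    x = batch
    codes = []
    for w_in, w_out in zip(widths, widths[1:]):
        W = rng.standard_normal((w_in, w_out)) / np.sqrt(w_in)
        pre = x @ W
        codes.append(pre > 0)      # which units fire for each input
        x = np.maximum(pre, 0.0)   # ReLU
    return np.concatenate(codes, axis=1)

def diversity_score(widths, batch):
    """More distinct patterns suggests the untrained net separates inputs better."""
    codes = activation_patterns(widths, batch)
    return len({tuple(row) for row in codes.astype(int)})

batch = rng.standard_normal((32, 16))
print(diversity_score([16, 64, 64, 10], batch))  # at most 32, one per input
```

The appeal of this family of indicators is that the score needs only a forward pass at initialization, so thousands of candidate architectures can be ranked at a tiny fraction of the cost of training even one of them.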
This work is in conjunction with Joseph Mellor, Jack Turner and Elliot J. Crowley.
Bio:
Amos Storkey is Professor of Machine Learning and AI in the School of Informatics, University of Edinburgh, with a background in mathematics (MA Maths, Trinity, Cambridge) and theoretical physics (Part III Maths) before focusing on machine learning (MSc, PhD, Imperial College London). He moved to Edinburgh after his PhD, where he now leads a research team focused on deep neural networks, Bayesian and probabilistic models, transactional machine learning and efficient inference. He is currently director of the EPSRC Centre for Doctoral Training in Data Science, has been Programme Chair for the AI and Statistics conference, and was founder of the Edinburgh Deep Learning Workshop.
Future Efficient Distributed AI Systems
IoT Security on the Edge
Edinburgh Student Presentations
NetSentry: A Deep Learning Approach to Detecting Incipient Large-scale Network Attacks
Bio:
Towards Secure and Resilient IoT Infrastructures: an AI Perspective
Alec is a second-year PhD student at the University of Edinburgh. He previously worked for three years as a machine learning research engineer. His current research seeks to bridge the gap between the ever-evolving nature of cyber threats and the security and privacy of users’ data on networked systems, by using Artificial Intelligence to build automatic network threat detection and counteraction mechanisms.