World-leading computer and AI scientists from Google, Waymo, AWS, MIT, ETH Zurich, EPFL, Max Planck, and Yale will present in Sofia the latest and most exciting research that is set to fundamentally change technology in the coming years.
INSAIT welcomes software engineers, entrepreneurs, academics, students and deep-tech investors to participate in this unique event. The conference will be held annually as part of INSAIT’s mission of establishing the region as a world-class research and deep tech destination.
The conference spans a diverse set of topics including Machine Learning, Computer Vision, Autonomous Driving, Cybersecurity, Natural Language Processing, Programming Languages, Cryptography, Verification, Computer Architecture, and Programmable Networks.
ACM, ACL, AAAS, and AAAI Fellow
ACM SIGARCH Maurice Wilkes Award, ACM and IEEE Fellow
ERC Starting and Consolidator Grants
Max Planck Fellow
ACM SIGPLAN Robin Milner Award
Day 1 – September 30 (Friday)
|14:00 – 14:10||Prof. Martin Vechev||ETH Zurich / Architect of INSAIT||Opening|
|14:10 – 14:40||Dr. Kristina Toutanova||Google||Natural Language Processing|
|14:40 – 15:10||Dr. Dragomir Anguelov||Waymo||Computer Vision / Autonomous Driving|
|15:10 – 15:40||Prof. Luc Van Gool||ETH Zurich / INSAIT|
|15:40 – 15:50||Break|
|15:50 – 16:20||Prof. Virginia Vassilevska Williams||MIT|
|16:20 – 16:50||Dr. Rupak Majumdar||Max Planck Institute / Amazon|
|16:50 – 17:20||Prof. Martin Odersky||EPFL|
|17:20 – 17:40||Break|
|17:40 – 18:10||Dr. Mariana Raykova||Google|
|18:10 – 18:40||Prof. Mathias Payer||EPFL|
|18:40 – 19:10||Prof. Srdjan Capkun||ETH Zurich|
Day 2 – October 1 (Saturday)
|09:30 – 10:00||Prof. Martin Jaggi||EPFL|
|10:00 – 10:30||Prof. Otmar Hilliges||ETH Zurich|
|10:30 – 11:00||Prof. Martin Vechev||ETH Zurich / Architect of INSAIT||Trustworthy Machine Learning|
|11:00 – 11:10||Break|
|11:10 – 11:40||Prof. Peter Müller||ETH Zurich|
|11:40 – 12:10||Prof. Laurent Vanbever||ETH Zurich|
|12:10 – 12:40||Prof. Onur Mutlu||ETH Zurich|
|12:40 – 13:00||Break|
|13:00 – 13:30||Prof. Ce Zhang||ETH Zurich|
|13:30 – 14:00||Prof. Dragomir Radev||Yale University||Natural Language Processing|
In this presentation, I will talk about our work on natural language interfaces to databases. As part of the Yale Spider project, we have developed three new datasets and launched three matching shared tasks. Spider is a collection of 10,181 manually created natural language questions on databases from 138 domains, together with the 5,693 database queries that correspond to them. SParC (Semantic Parsing in Context) consists of 4,298 coherent sequences of questions and the matching queries. Finally, CoSQL consists of 3k Wizard-of-Oz (WoZ) dialogues with a total of 30k turns, and their translations to SQL.
I will then introduce GraPPa, a pre-training approach for table semantic parsing that learns a compositional inductive bias in the joint representations of textual and tabular data. We used GraPPa to obtain state-of-the-art (SOTA) performance on four popular fully supervised and weakly supervised table semantic parsing benchmarks. I will conclude with some recent work on text generation from hybrid inputs, such as structured + unstructured text.
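To make the text-to-SQL task these datasets target concrete, here is a minimal, invented Spider-style example: a natural-language question paired with the SQL query a semantic parser would be expected to produce. The schema, data, and question are illustrative, not taken from the actual datasets.

```python
import sqlite3

# A toy database in the spirit of Spider's multi-domain schemas
# (table and column names are invented for illustration).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE singer (singer_id INTEGER, name TEXT, country TEXT, age INTEGER);
    INSERT INTO singer VALUES
        (1, 'Ana', 'Bulgaria', 29),
        (2, 'Joe', 'USA', 41),
        (3, 'Mia', 'Bulgaria', 35);
""")

# A (question, query) pair like those a text-to-SQL model is trained on.
question = "How many singers are from Bulgaria?"
predicted_sql = "SELECT COUNT(*) FROM singer WHERE country = 'Bulgaria'"

# Executing the predicted query against the database yields the answer.
(answer,) = conn.execute(predicted_sql).fetchone()
print(question, "->", answer)  # -> 2
```

Execution accuracy on benchmarks like Spider is measured exactly this way: by running the predicted query and comparing its result to that of the gold query.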
|14:00 – 14:10||Prof. Martin Vechev||ETH Zurich / Architect of INSAIT||Closing|
Tutorials (October 3 – 4)
|Oct 3., Mon, 3-8 pm*||TBA||Prof. Martin Vechev||Trustworthy Machine Learning|
|Oct 3., Mon, 3-8 pm*||TBA||Dr. Mariana Raykova||What does cryptography study?|
|Oct 4., Tue, 3-8 pm*||TBA||Prof. Dragomir Radev||Learning to build natural language interfaces for database access|
|Oct 4., Tue, 3-8 pm*||TBA||Dr. Nikola Konstantinov||Statistical Machine Learning: Foundations and Present Challenges|
Due to limited space, to participate in the tutorials you must first register for the conference. Then, please follow the “Apply for Tutorials” link below. If selected, you will be notified.
*There will be two tutorials per day; each tutorial will take around 90 minutes, between 3 and 8 pm.
Tutorial on Trustworthy Machine Learning
This tutorial will cover some of the latest advances in building deep learning models with guarantees, including robustness, fairness and general safety. We will cover certification methods based on convex relaxations, branch-and-bound and their combination, both differentiable and otherwise. We will also cover recent methods for training neural networks which make them more amenable to certification, including methods which combine symbolic and differentiable reasoning. The methods are general and applicable to different data modalities including vision, NLP, tabular, etc. In the process we will also outline interesting open directions of both research and industrial interest. By the end of this tutorial, the student should be familiar with some of the latest advances in creating machine learning models with provable guarantees.
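As a minimal sketch of the simplest certification method in this family, interval bound propagation (the most basic convex relaxation), the toy example below certifies L-infinity robustness of a tiny hand-crafted two-layer ReLU network. All weights, inputs, and radii are invented for illustration; real certifiers use far tighter relaxations and branch-and-bound.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through the affine map x -> W @ x + b."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius  # worst-case growth of the box
    return c - r, c + r

def certified_robust(x, eps, W1, b1, W2, b2):
    """Check, via interval bound propagation, that every input within
    L-inf distance eps of x gets the same top class as x itself
    under the 2-layer ReLU network (W1, b1, W2, b2)."""
    lo, hi = interval_affine(x - eps, x + eps, W1, b1)
    lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)  # ReLU is monotone
    lo, hi = interval_affine(lo, hi, W2, b2)
    logits = W2 @ np.maximum(W1 @ x + b1, 0) + b2
    target = int(np.argmax(logits))
    # Robust if the target logit's lower bound beats every other
    # logit's upper bound over the whole input box.
    return all(lo[target] > hi[j] for j in range(len(hi)) if j != target)

# Tiny hand-crafted network (identity layers), so the bounds are easy to check.
W1, b1 = np.eye(2), np.zeros(2)
W2, b2 = np.eye(2), np.zeros(2)
x = np.array([2.0, 0.0])
print(certified_robust(x, 0.5, W1, b1, W2, b2))  # True: certified
print(certified_robust(x, 2.0, W1, b1, W2, b2))  # False: box too large
```

Note the asymmetry this illustrates: a `True` answer is a sound guarantee, while a `False` answer may just mean the relaxation was too loose, which is what the tighter and branch-and-bound methods in the tutorial address.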
Tutorial on ‘What does cryptography study?’
This talk will give an overview of what cryptography studies. We will talk about how we formally model and prove security and privacy properties in cryptography. We will cover some basic concepts, such as encryption and digital signatures, which are fundamental to any security system. Then, we will introduce advanced cryptographic techniques such as secure multiparty computation, homomorphic encryption, zero-knowledge proofs, and differential privacy; we will discuss the problems they aim to solve and some construction approaches used in existing solutions.
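To give a flavor of secure multiparty computation, here is a toy additive secret-sharing scheme in pure Python. The modulus and party count are illustrative choices, and real MPC protocols are substantially more involved, but the core idea is visible: individual shares reveal nothing, yet parties can compute on shares locally.

```python
import random

P = 2**61 - 1  # a large prime modulus (illustrative choice)

def share(secret, n=3):
    """Split a secret into n additive shares mod P; any n-1 shares
    are uniformly random and reveal nothing about the secret."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod P."""
    return sum(shares) % P

# Two secrets, each split among three parties.
a_shares, b_shares = share(42), share(100)

# Each party adds its two shares locally; reconstructing the resulting
# shares yields a + b without anyone ever seeing a or b in the clear.
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 142
```

This additive-homomorphism trick is the starting point for the secure multiparty computation protocols the tutorial will discuss.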
Tutorial on Learning to build natural language interfaces for database access
This talk will introduce the challenges of building natural language interfaces to structured data. We will start with a quick introduction to natural language processing, semantic parsing, and sentence and structured-data representations. We will then look at the most popular tasks in domain-independent text-to-data translation, such as WikiTableQuestions, WikiSQL, Spider, and SQUALL, and the approaches that achieve the best performance on them. Next, we will switch to data-to-text generation, looking at common representations such as RDF triples and flattened tables, and at popular tasks such as WebNLG, Rotowire, ToTTo, DART, FetaQA, and LogicNLG. If time allows, we will also look at recent work on pretraining large models on structured data, such as TURL, TUTA, TaPaS, TaBERT, GraPPa, and TABBIE.
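As a small sketch of the two structured-data representations mentioned above, the following invented example turns one table row into subject-predicate-object triples (the WebNLG-style input) and into a flattened, linearized string (the form often fed to pretrained language models). The row and field names are illustrative, not drawn from any of the named datasets.

```python
# One table row, as a mapping from column names to cell values
# (invented example data).
row = {"name": "Alan Turing", "birth_place": "London", "field": "Computer Science"}

# Representation 1: RDF-style triples, using the name column as the subject.
subject = row["name"]
triples = [(subject, pred, obj) for pred, obj in row.items() if pred != "name"]

# Representation 2: a flattened (linearized) table, one "column: value"
# segment per cell, separated by a delimiter token.
flattened = " | ".join(f"{col}: {val}" for col, val in row.items())

print(triples)
print(flattened)  # name: Alan Turing | birth_place: London | field: Computer Science
```

A data-to-text model would then be trained to generate a sentence such as "Alan Turing was born in London and worked in computer science" from either representation.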
Tutorial on Statistical Machine Learning: Foundations and Present Challenges
This tutorial will provide an introduction to the statistical foundations of machine learning theory and describe some recent advances and open problems in the field. We will cover several classic concepts in statistical learning, such as PAC-learnability, complexity measures and generalization. We will also discuss several topics of particular importance in the context of large-scale machine learning, such as the interplay between generalization and optimization and the transferability of ML models across different domains. In the last part of the tutorial we will cover some recent developments in statistical learning concerning metrics other than accuracy, such as robustness and fairness, and outline several open research directions in the area.
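To make the generalization discussion concrete, here is a small simulation of the classic Hoeffding-plus-union-bound guarantee for a finite hypothesis class. The sample size, class size, and risk values are invented for illustration.

```python
import math
import numpy as np

def hoeffding_bound(n, num_hypotheses, delta):
    """Uniform-convergence margin for a finite hypothesis class: with
    probability at least 1 - delta, every hypothesis's empirical risk on
    n i.i.d. samples (losses in [0, 1]) lies within this margin of its
    true risk (Hoeffding's inequality plus a union bound)."""
    return math.sqrt(math.log(2 * num_hypotheses / delta) / (2 * n))

# Simulation: 100 hypotheses, each with true risk 0.3, evaluated on
# 1000 samples with 0/1 losses.
rng = np.random.default_rng(0)
n, m, delta, true_risk = 1000, 100, 0.05, 0.3
losses = rng.random((m, n)) < true_risk        # Bernoulli(0.3) loss matrix
empirical_risks = losses.mean(axis=1)
worst_gap = float(np.abs(empirical_risks - true_risk).max())

print(f"bound = {hoeffding_bound(n, m, delta):.3f}, worst gap = {worst_gap:.3f}")
```

Even the worst gap over all 100 hypotheses stays under the theoretical margin here, which is exactly the uniform-convergence behavior that PAC-style generalization arguments formalize.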
|High School Students||Free (see below)|
|University Students||10 BGN|