- Abstract submission: April 27th, 2020
- Paper deadline: May 4th, 2020
- Author notification: June 20th, 2020
- Camera-ready: July 10th, 2020
Prof. Vladimir Vapnik, Facebook AI Research.
Complete Statistical Theory of Learning: Learning Using Statistical Invariants
(co-authored with Dr Rauf Izmailov)
The statistical theory of learning is devoted to methods for which the obtained approximations converge to the desired function as the number of observations increases.
The theory uses instruments that provide convergence in the L2 norm of the space of functions, i.e., the so-called strong mode of convergence. However, in Hilbert space, along with convergence in the space of functions, there also exists the so-called weak mode of convergence, i.e., convergence in the space of functionals.
The weak mode of convergence (under some conditions) also leads to convergence of the approximations to the desired function in the L2 metric, although it uses a different mechanism to achieve that convergence.
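In standard Hilbert-space notation, the two modes of convergence mentioned above can be sketched as follows (these are the usual textbook definitions, not formulas from the talk itself):

```latex
% Strong convergence: convergence in the norm of the function space
\|f_n - f\|_{L_2} = \left( \int \bigl(f_n(x) - f(x)\bigr)^2 \, dx \right)^{1/2} \longrightarrow 0

% Weak convergence: convergence of the values of linear functionals,
% i.e., of inner products with every fixed element \phi of the space
(f_n, \phi) \longrightarrow (f, \phi) \quad \text{for all } \phi \in L_2
```

Strong convergence implies weak convergence, but not the other way around, which is why methods exploiting both mechanisms can behave differently from methods relying on the strong mode alone.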
This talk discusses new learning methods which use both mechanisms (weak and strong) of convergence simultaneously. Such methods allow one
(1) to find the admissible set of functions (i.e., the collection of appropriate approximating functions) and
(2) to find the desired approximation in this set. Since only two modes of convergence exist in Hilbert space, we call the theory that uses both of them the complete statistical theory of learning.
Along with general reasoning, the talk presents algorithms that constitute a new learning paradigm called Learning Using Statistical Invariants (LUSI). LUSI algorithms were developed for sets of functions belonging to a Reproducing Kernel Hilbert Space (RKHS) and include a modified SVM method (LUSI-SVM). The talk also presents a LUSI modification of neural network methods (LUSI-NN). LUSI methods require fewer training examples for learning.
In conclusion, the talk discusses the general (philosophical) framework of the new learning paradigm, including the concept of intelligence.
Prof. Vapnik is one of the main developers of the Vapnik–Chervonenkis theory of statistical learning and co-inventor of the support-vector machine method and the support-vector clustering algorithm.
Vapnik was inducted into the U.S. National Academy of Engineering in 2006. He received the 2005 Gabor Award, the 2008 Paris Kanellakis Award, the 2010 Neural Networks Pioneer Award, the 2012 IEEE Frank Rosenblatt Award, the 2012 Benjamin Franklin Medal in Computer and Cognitive Science from the Franklin Institute, the 2013 C&C Prize from the NEC C&C Foundation, the 2014 Kampé de Fériet Award, and the 2017 IEEE John von Neumann Medal. In 2018, he received the Kolmogorov Medal from the University of London and delivered the Kolmogorov Lecture.
The main purpose of conformal prediction is to complement the predictions delivered by various machine-learning algorithms with provably valid measures of their accuracy and reliability, under the assumption that the observations are independent and identically distributed. It was originally developed in the late 1990s and early 2000s, but in recent years it has gained popularity and been developed further in important directions.
Conformal prediction is a universal tool in several senses; in particular, it can be used in combination with any known machine-learning algorithm, such as SVM, Neural Networks, Ridge Regression, etc. It has been applied to a variety of problems from diagnostics of depression to drug discovery to the behaviour of bots.
A sister method, Venn prediction, was developed at the same time as conformal prediction and is used for probabilistic prediction in the context of classification. Among recent developments are adaptations of conformal and Venn predictors to probabilistic prediction in the context of regression.
The COPA series of workshops is a home for work in both conformal and Venn prediction, as reflected in its full name “Conformal and Probabilistic Prediction with Applications”. The aim of this symposium is to serve as a forum for the presentation of new and ongoing work and the exchange of ideas between researchers on any aspect of Conformal and Probabilistic Prediction and their applications to interesting problems in any field.
Topics of the symposium include, but are not limited to:
Authors are invited to submit original, English-language research contributions or experience reports. Papers should be no longer than 20 pages formatted according to the well-known JMLR (Journal of Machine Learning Research) style. The LaTeX package for the style is available here. All aspects of the submission and notification process will be handled online via the EasyChair Conference System at:
There will be two Alexey Chervonenkis awards for the Best Paper and Best Student Paper, presented at the conference.
Researchers interested in Conformal Prediction may be interested in joining our online discussion group. Future announcements and related materials will be published regularly.