The COVID-19 pandemic has imposed unprecedented changes on our lives. After much consideration, the General Chairs and Organizing Committee of COPA 2020 have decided to make the conference fully virtual. The events of the past few months and continued safety concerns have led us to this difficult decision. Our focus now is on running the conference fully online, with virtual session rooms and facilities for the online presentation of talks, so that attendees will have an experience very close to that of an in-person conference. Every effort will also be made to publish the conference proceedings in the Proceedings of Machine Learning Research (PMLR) at the earliest possible time.
- Abstract submission: April 27th
- Paper deadline: May 11th (extended from May 4th)
- Author notification: July 7th (originally July 3rd)
- Camera-ready: July 21st (originally July 10th)
- Poster (extended abstract up to 2 pages) submission: July 21st
Dr. Rauf Izmailov, Perspecta Labs.
Prof. Marco Campi, University of Brescia.
Dr. Rauf Izmailov
Perspecta Labs
Professor Vladimir Vapnik
Facebook AI Research
Complete Statistical Theory of Learning: Learning Using Statistical Invariants
Statistical learning theory is devoted to methods whose approximations converge to the desired function as the number of observations increases.
The theory uses instruments that provide convergence in the L2 norm of the space of functions, the so-called strong mode of convergence. However, in Hilbert space, along with convergence in the space of functions, there also exists the so-called weak mode of convergence, i.e., convergence in the space of functionals.
The weak mode of convergence (under some conditions) also leads to convergence of the approximations to the desired function in the L2 metric, although it uses a different mechanism to achieve that convergence.
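In symbols (a standard Hilbert-space formulation, not quoted from the talk), for a sequence of approximations f_ell and the desired function f, the two modes can be written as:

```latex
% Strong mode: convergence in the L2 norm of the space of functions
\lim_{\ell \to \infty} \| f_\ell - f \|_{L_2} = 0 .
% Weak mode: convergence in the space of functionals, i.e., of the
% inner products with every element \phi of the space
\lim_{\ell \to \infty} \langle f_\ell , \phi \rangle
  = \langle f , \phi \rangle \quad \text{for all } \phi .
```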
This talk discusses new learning methods which use both mechanisms (weak and strong) of convergence simultaneously. Such methods allow one
(1) to find the admissible set of functions (i.e., the collection of appropriate approximating functions) and
(2) to find the desired approximation in this set. Since only two modes of convergence exist in Hilbert space, we call the theory that uses both of them the complete statistical theory of learning.
Along with general reasoning, the talk presents algorithms which constitute a new learning paradigm called Learning Using Statistical Invariants (LUSI). LUSI algorithms were developed for sets of functions belonging to a Reproducing Kernel Hilbert Space (RKHS), and include a modified SVM method (LUSI-SVM).
The talk also presents a LUSI modification of neural network methods (LUSI-NN). LUSI methods require fewer training examples for learning.
In conclusion, the talk discusses the general (philosophical) framework of the new learning paradigm, including the concept of intelligence.
Dr. Rauf Izmailov is a Chief Research Scientist and Fellow at Perspecta Labs and an established researcher in mathematical and computer models for machine learning, optimization, statistical data analysis, and networking systems.
He has almost 35 years of industry experience (including AT&T Bell Labs and NEC Labs America) in research and technical leadership of R&D teams, and more than 150 published papers.
With Prof. Vapnik, he co-invented the new ML paradigm, Learning Using Privileged Information (LUPI), and, most recently, Learning Using Statistical Invariants (LUSI).
Prof. Vapnik is one of the main developers of the Vapnik–Chervonenkis theory of statistical learning and a co-inventor of the support-vector machine method and the support-vector clustering algorithm.
Vapnik was inducted into the U.S. National Academy of Engineering in 2006. He received the 2005 Gabor Award, the 2008 Paris Kanellakis Award, the 2010 Neural Networks Pioneer Award, the 2012 IEEE Frank Rosenblatt Award, the 2012 Benjamin Franklin Medal in Computer and Cognitive Science from the Franklin Institute, the 2013 C&C Prize from the NEC C&C Foundation, the 2014 Kampé de Fériet Award, and the 2017 IEEE John von Neumann Medal.
Professor Marco Campi
Department of Information Engineering, University of Brescia, Italy
Risk and Complexity in Prediction Problems
(joint work with Simone Garatti)
We show that the risk (probability with which a shortfall occurs) and complexity (number of relevant data-points to determine a solution) are strictly connected concepts in a general data-driven decision-making framework.
When these concepts are applied to prediction, they lead to extremely tight evaluations of the distribution of the probability of prediction error.
These results are relevant to all approaches in prediction theory that contain a compression scheme, including SVM (Support Vector Machine), SVR (Support Vector regression), SVDD (Support Vector Data Description) and others.
Marco C. Campi has taught topics related to data-driven and inductive methods for many years. He is a distinguished lecturer of the Control Systems Society and has chaired the IFAC Technical Committee on Modeling, Identification, and Signal Processing for the past six years.
Professor Campi has held visiting and teaching appointments at several institutions and has served in various capacities on the editorial boards of Automatica, Systems and Control Letters, and the European Journal of Control. He is a Fellow of IEEE, a Fellow of IFAC, and a recipient of the IEEE CSS George S. Axelby Outstanding Paper Award. He has delivered plenary addresses at major conferences, including Optimization, CDC, MTNS, and SYSID.
General Paper
"Constructing normalized nonconformity measures based on maximizing predictive efficiency"
Anthony Bellotti
Student Paper
"Conformal multi-target regression using neural networks"
Soundouss Messoudi, Sébastien Destercke and Sylvain Rousseau
Student Paper
"Classification of aerosol particles using inductive conformal prediction"
Linn Karlsson, Henrik Boström and Paul Zieger
Tutorials
Chair: TBD
"Conformal Prediction"
Henrik Linusson
"Venn Predictors"
Ulf Johansson
This tutorial first covers the basics of Venn predictors, including the special case of Venn-Abers predictors.
After that, we consider Venn predictors in the context of trustworthiness. How to make decisions and recommendations from AI systems trustworthy is currently a key question for uptake and acceptance, often determining the success of such systems.
Trustworthiness is also the ultimate goal of both Explainable AI (XAI) and Fairness, Accountability and Transparency (FAT).
"Conformal predictive distributions"
Paolo Toccaceli
Predictive distributions (PDs) can be informally seen as a form of regression where, instead of a single value, the prediction is a probability distribution on the label space.
While PDs tend to be associated primarily with Bayesian methods, the tutorial will introduce Conformal Predictive Distributions (CPDs), which are instead formulated in a frequentist framework and as such do not require priors.
CPDs are closely related to Conformal Predictors, with which they share some properties. Some examples from real-world applications will illustrate the use of CPDs.
A discussion of suitable metrics and diagnostic tools for evaluating probabilistic predictions will conclude the talk.
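To make the idea concrete, here is a minimal sketch of a split-conformal predictive distribution (an illustration only, under the i.i.d. assumption; the function name is our own, and `predict` stands for any fitted point predictor):

```python
import numpy as np

def conformal_predictive_cdf(predict, X_cal, y_cal, x_new):
    """A minimal split-conformal predictive distribution (sketch).

    Returns a function Q(y) approximating P(Y <= y | x_new): the
    fraction of held-out calibration residuals not exceeding y - y_hat.
    """
    # Residuals of the point predictor on the calibration set.
    residuals = np.sort(y_cal - predict(X_cal))
    y_hat = predict(np.atleast_2d(x_new))[0]
    n = len(residuals)

    def cdf(y):
        # Empirical CDF of the residuals, shifted by the point prediction;
        # dividing by n + 1 gives the usual conformal convention.
        return np.searchsorted(residuals, y - y_hat, side="right") / (n + 1)

    return cdf
```

The returned `cdf` is a step function that is monotone in y, so it can be inverted to read off predictive quantiles.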
"Deep learning and Conformal Prediction"
Siamak Mehrkanoon
Deep learning and artificial-neural-network-based models have significantly advanced the state of the art in machine learning and data-driven modeling, thanks to increasing computing power and the vast amount of available data.
On the other hand, powerful kernel-based models have also been successfully applied in many applications and enjoy well-established theoretical analysis. This tutorial aims to explain synergies between neural networks, deep learning, and kernel-based methods, and to introduce some new models developed in this context.
Opening remarks
Regression and predictive distributions
Chair: Claus Bendtsen
"Conformal multi-target regression using neural networks"
Soundouss Messoudi, Sébastien Destercke and Sylvain Rousseau
"Evaluating different approaches to calibrating conformal predictive systems"
Hugo Werner, Lars Carlsson, Ernst Ahlberg and Henrik Boström
"Mondrian Conformal regressors"
Henrik Boström and Ulf Johansson
"Conformal calibrators"
Vladimir Vovk, Ivan Petej, Paolo Toccaceli, Alexander Gammerman, Ernst Ahlberg and Lars Carlsson
Training Conformal Predictors
Chair: Giovanni Cherubin
"Training conformal predictors"
Nicolo Colombo and Vladimir Vovk
"Constructing normalized nonconformity measures based on maximizing predictive efficiency"
Anthony Bellotti
Invited talk 1
Chair: Alexander Gammerman
"Complete statistical theory of learning: learning using statistical invariants"
Dr. Rauf Izmailov (Chief Research Scientist, Perspecta Labs, USA)
Time series forecasting and online learning
Chair: Vladimir Vovk
"Practical investment with the long-short game"
Najim Al-Baghdadi, David Lindsay, Yuri Kalnishkan and Sian Lindsay
"Mixing past predictions"
Alexander Korotin, Vladimir Vyugin and Evgeny Burnaev
"Application of Conformal Prediction interval estimations to market makers' net positions"
Wojciech Wisniewski, David Lindsay and Sian Lindsay
"A histogram based betting function for Conformal martingales"
Charalambos Eliades and Harris Papadopoulos
Conformal Prediction and learning problems
Chair: Evgueni Smirnov
"Conformal anomaly detection for visual reconstruction using gestalt principles"
Ilia Nouretdinov, Alexander Balinsky and Alexander Gammerman
"A Conformalized density-based clustering analysis of malicious traffic for botnet detection"
Bahareh Mohammadi Kiani
"Fast probabilistic prediction for kernel SVM via enclosing balls"
Nery Riquelme-Granada, Khuong An Nguyen and Zhiyuan Luo
Poster session
Chair: Zhiyuan Luo
"Evaluation and extension of inductive Venn-Abers predictive distribution"
Ilia Nouretdinov, James Gammerman and Daljit Rehal
"A transformer conformal predictor for paraphrase detection"
Patrizio Giovannotti
Invited talk 2
Chair: Vladimir Vovk
"Risk and complexity in prediction problems"
Prof. Marco Campi (Department of Information Engineering, University of Brescia, Italy)
Applications of Conformal Prediction
Chair: Paolo Toccaceli
"Classification of aerosol particles using inductive conformal prediction"
Linn Karlsson, Henrik Boström and Paul Zieger
"BERT based Conformal Prediction for sentiment analysis"
Lysimachos E. Maltoudoglou, Andreas Paisios and Harris Papadopoulos
"Batch mode active learning for mitotic phenotypes using conformal prediction"
Adam Corrigan, Philip Hopcroft, Ana Narvaez and Claus Bendtsen
Best paper award and Closing address
The main purpose of conformal prediction is to complement predictions delivered by various algorithms of Machine Learning with provably valid measures of their accuracy and reliability under the assumption that the observations are independent and identically distributed. It was originally developed in the late 1990s and early 2000s but has become more popular and further developed in important directions in recent years.
Conformal prediction is a universal tool in several senses; in particular, it can be used in combination with any known machine-learning algorithm, such as SVM, Neural Networks, Ridge Regression, etc. It has been applied to a variety of problems from diagnostics of depression to drug discovery to the behaviour of bots.
A sister method of Venn prediction was developed at the same time as conformal prediction and is used for probabilistic prediction in the context of classification. Among recent developments are adaptations of conformal and Venn predictors to probabilistic prediction in the context of regression.
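As an illustration of how conformal prediction wraps an arbitrary point predictor, here is a minimal sketch of split (inductive) conformal prediction for regression; the function name and interface are our own, and `predict` stands for any fitted model:

```python
import numpy as np

def split_conformal_interval(predict, X_cal, y_cal, x_new, alpha=0.1):
    """Split (inductive) conformal prediction interval for regression.

    `predict` is any fitted point-prediction function; the coverage
    guarantee holds under the i.i.d. (exchangeability) assumption,
    regardless of how good the underlying model is.
    """
    # Nonconformity scores: absolute residuals on a held-out calibration set.
    scores = np.abs(y_cal - predict(X_cal))
    n = len(scores)
    # Conservative finite-sample quantile of the calibration scores.
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    y_hat = predict(np.atleast_2d(x_new))[0]
    return y_hat - q, y_hat + q
```

With alpha = 0.1, the interval contains the true label with probability at least 90% over i.i.d. draws, whatever model produced `predict`.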
The COPA series of workshops is a home for work in both conformal and Venn prediction, as reflected in its full name “Conformal and Probabilistic Prediction with Applications”. The aim of this symposium is to serve as a forum for the presentation of new and ongoing work and the exchange of ideas between researchers on any aspect of Conformal and Probabilistic Prediction and their applications to interesting problems in any field.
Topics of the symposium include, but are not limited to:
Authors are invited to submit original, English-language research contributions or experience reports. Papers should be no longer than 20 pages, formatted according to the well-known JMLR (Journal of Machine Learning Research) style. The LaTeX package for the style is available here. All aspects of the submission and notification process will be handled online via the EasyChair Conference System at:
There will be two Alexey Chervonenkis awards for the Best Paper and Best Student Paper, presented at the conference.
Researchers interested in Conformal Prediction are welcome to join our online discussion group, where future announcements and related materials will be published regularly.
All accepted papers will be presented at the conference and published in the Proceedings of Machine Learning Research (PMLR), Volume 128, at http://proceedings.mlr.press/v128/.
Please make sure to use the correct style file; it should already be installed on your computer, or will be installed "on the fly". Camera-ready papers should follow the style of the Proceedings section of JMLR rather than that of JMLR itself.
In the end you should submit two files: (1) a file containing your paper in PDF format; (2) the copyright form (http://proceedings.mlr.press/pmlr-license-agreement.pdf). Please upload the final version of your paper in PDF via EasyChair and email the signed copyright form to zhiyuan.luo@rhul.ac.uk.
The beginning of your file should look like this:
\documentclass[wcp]{jmlr}
\usepackage{amsmath,amssymb,graphicx,url}
\jmlrvolume{128}
\jmlryear{2020}
\jmlrworkshop{Conformal and Probabilistic Prediction and Applications}
\jmlrproceedings{PMLR}{Proceedings of Machine Learning Research}

\title{Nonparametric predictive distributions based on conformal prediction}

\author{\Name{John Smith}\Email{j.smith@gmail.com}\\
  \addr{Royal Holloway, University of London, Egham, Surrey, UK}\\
  \Name{Minge Shen}\Email{m.shen@gmail.com}\\
  \addr{Rutgers University, New Brunswick, NJ, USA}}

\editor{Alexander Gammerman, Vladimir Vovk, Zhiyuan Luo, Evgueni Smirnov and Giovanni Cherubin}

\begin{abstract}
This paper applies conformal prediction to derive predictive distributions that are valid under a nonparametric assumption.
\end{abstract}

\begin{keywords}
Conformal prediction, predictive distributions, regression.
\end{keywords}
Notice the presence of the command
\jmlrproceedings{PMLR}{Proceedings of Machine Learning Research}
It became necessary after the recent rebranding of JMLR W&CP as the Proceedings of Machine Learning Research (PMLR); see http://proceedings.mlr.press.
Please let us know if you encounter any problems following these instructions.
For further information (which, however, can be confusing), see http://jmlr.org/proceedings/faq.html (under the heading "What is the Style File for the Proceedings?"); COPA 2020 uses the one-column style file. The style file is available from the CTAN (Comprehensive TeX Archive Network) web site at http://ctan.org/tex-archive/macros/latex/contrib/jmlr. There you can go inside the directory called "sample_papers" and emulate the files jmlr_sample.tex and jmlr_sample.bib (the latter is only needed if you use bibtex).
A lot of useful advice on the JMLR style can be found at http://jmlr.org/format/format.html (however, please make sure to use the Proceedings style, as described above, rather than the main journal style).
Royal Holloway, University of London
Egham, United Kingdom