Prof. John Calsamiglia
Universitat Autònoma de Barcelona
The standard paradigm in quantum statistical inference protocols, including quantum metrology and quantum hypothesis testing, is to optimize the accuracy of the estimates for a fixed sample size of quantum data. But why fix the sample size in advance, if fewer copies might suffice?
I will present two problems that go beyond this paradigm in that the sample size is not fixed in advance. Instead, the quantum data are analyzed sequentially, on the fly, so that the measurement outcomes obtained so far can be used to adapt the protocol and, crucially, to stop any further sampling.
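To illustrate this sequential logic, here is a minimal sketch of the classical analogue, Wald's sequential probability ratio test for two coin biases: samples are taken one at a time and sampling stops as soon as the accumulated evidence crosses a threshold. This is an illustrative classical example only, not the quantum protocol of the talk, and the names and parameters (`sprt`, `p0`, `p1`, `alpha`, `beta`) are assumptions chosen for the sketch.

```python
import math
import random

def sprt(sample, p0, p1, alpha=0.01, beta=0.01):
    """Classical sequential probability ratio test between two coin biases.

    Draws observations one at a time from `sample()` and stops as soon as
    the accumulated log-likelihood ratio crosses one of Wald's thresholds.
    Returns the accepted hypothesis (0 or 1) and the number of samples used.
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 above this threshold
    lower = math.log(beta / (1 - alpha))   # accept H0 below this threshold
    llr, n = 0.0, 0
    while lower < llr < upper:
        x = sample()                        # next observation (0 or 1)
        n += 1
        # log-likelihood ratio increment for a Bernoulli outcome
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
    return (1 if llr >= upper else 0), n

# Example: data actually produced by the p = 0.6 source
decision, copies_used = sprt(lambda: random.random() < 0.6, p0=0.5, p1=0.6)
print(decision, copies_used)
```

The number of samples consumed is itself random; it is its average that the sequential strategies discussed in the talk optimize.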
The first problem is sequential quantum hypothesis testing, where the goal is to discriminate between two arbitrary quantum states with a prescribed error threshold. I will discuss the proof of the ultimate quantum limit on the average number of copies needed to accomplish this task, and I will show that sequential strategies outperform the currently established ultimate limits based on a fixed number of copies.

I will finally discuss the quantum change point detection problem. Here, a source emits quantum particles in a default state until, at a certain moment, it undergoes a drastic change and starts producing a different state. The task is then to identify the moment at which the change occurred.
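For intuition on the change point task, the sketch below gives the classical maximum-likelihood analogue: given a recorded sequence and known "default" and "changed" distributions, it scans over candidate change positions and picks the most likely one. The quantum version treated in the talk deals with non-orthogonal states and measurement choices, which this classical sketch does not capture; the helper names (`ml_change_point`, `logp_default`, `logp_changed`) are illustrative assumptions.

```python
import math

def ml_change_point(xs, logp_default, logp_changed):
    """Maximum-likelihood estimate of where a classical i.i.d. source
    switches from a default distribution to a changed one.

    `xs` is the observed sequence; `logp_default` / `logp_changed` return
    the log-likelihood of a single observation under each regime.  Returns
    the index k at which the change most plausibly occurred (observations
    before k come from the default source, the rest from the changed one).
    """
    best_k, best_ll = 0, -math.inf
    for k in range(len(xs) + 1):
        ll = (sum(logp_default(x) for x in xs[:k])
              + sum(logp_changed(x) for x in xs[k:]))
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k
```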