This course is organised into multiple units. While I have tried my best to align units to weeks, sometimes we will cover parts of multiple units in the same week. The week-wise plan is given below.
Week 1: (Unit 1) Information and probabilistic modelling: information, uncertainty, basic concepts of probability, Markov inequality, limit theorems
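As an informal preview (our own illustration, not part of the course notes), Markov's inequality from Week 1 bounds the tail of a nonnegative random variable by its mean:

```latex
\Pr[X \ge a] \;\le\; \frac{\mathbb{E}[X]}{a}, \qquad X \ge 0,\ a > 0.
```

For instance, a nonnegative random variable with mean 1 exceeds 10 with probability at most 1/10.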
Week 2: (Unit 2) Uncertainty, compression, and entropy: source model, motivating examples, a compression problem, Shannon entropy, random hash
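To give a taste of the central quantity of Week 2, here is a minimal Python sketch of Shannon entropy, H(p) = -sum_i p_i log2 p_i; the function name entropy is our own choice, not course code.

```python
import math

def entropy(p):
    """Shannon entropy, in bits, of a finite pmf given as a list of probabilities."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

print(entropy([0.5, 0.5]))  # 1.0 bit: a fair coin is maximally uncertain
print(entropy([0.9, 0.1]))  # ~0.469 bits: a biased coin is more predictable
```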
Week 3: (Unit 3) Randomness and entropy: uncertainty and randomness, total variation distance, generating uniform bits, generating from uniform bits, typical sets and entropy
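As a preview of Week 3's total variation distance (a sketch of ours, assuming both pmfs are lists over the same finite alphabet):

```python
def total_variation(p, q):
    """Total variation distance between finite pmfs p and q: half their L1 distance."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

print(total_variation([0.5, 0.5], [0.9, 0.1]))  # 0.4: a fair coin vs a biased coin
```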
Week 4: (Unit 4) Information and statistical inference-1: hypothesis testing and estimation, examples, the log-likelihood ratio test, Kullback-Leibler divergence and Stein's lemma, properties of KL divergence
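As a preview of Week 4's key quantity (our own sketch, assuming q puts positive mass wherever p does), the Kullback-Leibler divergence D(p||q) in bits:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in bits between finite pmfs p and q."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

print(kl_divergence([0.5, 0.5], [0.9, 0.1]))  # ~0.737 bits
```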
Week 5: (Unit 5) Information and statistical inference-2: information per coin toss, multiple hypothesis testing, mutual information, Fano's inequality
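A preview of Week 5's mutual information (our own sketch, with the joint pmf supplied as a 2-D list indexed as joint[x][y]):

```python
import math

def mutual_information(joint):
    """Mutual information I(X;Y) in bits from a joint pmf given as a 2-D list."""
    px = [sum(row) for row in joint]        # marginal of X
    py = [sum(col) for col in zip(*joint)]  # marginal of Y
    return sum(pxy * math.log2(pxy / (px[i] * py[j]))
               for i, row in enumerate(joint)
               for j, pxy in enumerate(row) if pxy > 0)

print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))  # 1.0 bit: Y determines X exactly
```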
Week 6: (Unit 6) Properties of measures of information-1: definitions, chain rule, shape of information functions (boundedness, concavity/convexity, non-negativity), data processing inequality
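Two of the Week 6 properties, stated informally as a preview (our wording, not the lecture statements):

```latex
H(X, Y) = H(X) + H(Y \mid X) \quad \text{(chain rule)}, \qquad
I(X; Z) \le I(X; Y) \quad \text{whenever } X \to Y \to Z \text{ is a Markov chain (data processing).}
```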
Week 7: (Unit 7) Properties of measures of information-2: proof of Fano's inequality, variational formulae, capacity as information radius, proof of Pinsker's inequality, continuity of entropy; (Unit 8) Information-theoretic lower bounds: lower bound for source coding, lower bound for Stein's lemma
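For orientation, Pinsker's inequality from Week 7 ties together the two distances met in Weeks 3 and 4 (stated here with D in nats; our preview, not the lecture statement):

```latex
\mathrm{TV}(P, Q) \;\le\; \sqrt{\tfrac{1}{2}\, D(P \,\|\, Q)}.
```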
Week 8: (Unit 8 continued) lower bound for randomness generation, strong converse, lower bound for minimax estimation; (Unit 9) Compression 1: variable-length source codes
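As a preview of variable-length source codes (our informal statement), the Kraft inequality characterises the codeword lengths l_1, ..., l_m achievable by a binary prefix-free code:

```latex
\sum_{i=1}^{m} 2^{-\ell_i} \;\le\; 1.
```

Conversely, any lengths satisfying the inequality are realised by some prefix-free code, which is why the optimal expected code length lies within one bit of the entropy.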
Weeks 9-12: We will post the exact plan soon; broadly, we will cover compression, channel coding, and quantisation in the remaining four weeks.