Project 3: invertible probability function

In a normalizing flow model, the decoding function is designed to be the exact inverse of the encoding function and cheap to compute, which is what makes normalizing flows tractable. However, neural networks are not invertible by default. This project is about constructing an invertible process. Suppose we have a probability distribution with the following properties:

· p(x_1, x_2) = (x_1 - 1) * x_2 / 9

· ‘p’ is defined over the rectangle x_1 in [1, 4], x_2 in [0, 2]

· ‘p’ integrates to 1 over this domain (see the numerical check after this list)
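As context for the last two bullet points, here is a minimal sketch (not part of the assignment text, and assuming SciPy is available) that numerically checks that p integrates to 1 over the stated rectangle:

    # Numerically verify that p(x_1, x_2) = (x_1 - 1) * x_2 / 9 integrates
    # to 1 over the rectangle x_1 in [1, 4], x_2 in [0, 2].
    from scipy.integrate import dblquad

    def p(x1, x2):
        # Joint density given in the problem statement.
        return (x1 - 1.0) * x2 / 9.0

    # dblquad integrates func(y, x): the inner variable (x_2 here) comes first,
    # the outer variable (x_1 here) second.
    total, abs_err = dblquad(
        lambda x2, x1: p(x1, x2),   # integrand with arguments in dblquad's order
        1.0, 4.0,                   # outer limits: x_1 from 1 to 4
        lambda x1: 0.0,             # inner lower limit: x_2 = 0
        lambda x1: 2.0,             # inner upper limit: x_2 = 2
    )

    print(f"integral over [1, 4] x [0, 2] = {total:.6f} (error estimate {abs_err:.1e})")
    # Prints 1.000000 (up to numerical error), confirming p is a valid density.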

Submit the following: 

1. Show and explain the equations used in the technique known as change of variables, which enables the creation of an invertible process (the general identity is sketched after this list for reference).

2. Show the final transformed function and verify that it is a proper probability distribution.

3. Briefly explain the meaning of each equation under 1. and the meaning of the final equation under 2.
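For reference, item 1 refers to the standard change-of-variables identity for an invertible map. A sketch in LaTeX notation (the symbols f, p_Z and J_f are generic placeholders, not taken from the assignment) is:

    % z = f(x) is an assumed invertible, differentiable map with inverse x = f^{-1}(z);
    % p_Z is a simple base density and J_f denotes the Jacobian of f.
    p_X(\mathbf{x}) = p_Z\bigl(f(\mathbf{x})\bigr)\,\bigl|\det J_f(\mathbf{x})\bigr|,
    \qquad
    \log p_X(\mathbf{x}) = \log p_Z\bigl(f(\mathbf{x})\bigr) + \log\bigl|\det J_f(\mathbf{x})\bigr|

The absolute value of the Jacobian determinant rescales probability mass under the transformation so that the transformed density still integrates to 1, which is the property items 2 and 3 ask you to demonstrate and explain.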
