Blog

Learning by Denoising Part 2: Connection between data distribution and denoising function

In this blog post, we complement our first post by examining denoising from a more analytic perspective, with detailed mathematical derivations. We will show that there is a unique two-way connection between the uncorrupted data distribution \(p(x)\) and the optimal denoising function \(g(\tilde x)\), provided that the corruption noise is Gaussian. The corrupted distribution \(p(\tilde x)\) plays a central role. (more…)
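As a preview of that connection, here is a sketch of the classical result such a derivation builds on, assuming isotropic Gaussian corruption with standard deviation \(\sigma\) (our notation, not necessarily the post's):

\[
\tilde x = x + n, \qquad n \sim \mathcal{N}(0, \sigma^2 I),
\]

\[
g(\tilde x) = \mathbb{E}[x \mid \tilde x] = \tilde x + \sigma^2 \nabla_{\tilde x} \log p(\tilde x).
\]

In words: the mean-squared-error-optimal denoiser equals the corrupted input plus the noise variance times the score of \(p(\tilde x)\). Knowing \(g\) therefore determines \(p(\tilde x)\) up to normalization, and vice versa.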

Learning by Denoising Part 1: What and why of denoising

Unsupervised learning tasks support the main tasks of supervised training by modeling the input distribution \(p(x)\) in one way or another. Denoising is no exception. When we use denoising as an auxiliary task, we are not interested in denoising itself, nor are we interested in taking samples from \(p(x)\) or computing the probability of the data. What we want is to extract features that describe the data and are useful for our primary task of supervised learning. (more…)
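To make the idea concrete, here is a minimal NumPy sketch of a denoising autoencoder used as a feature extractor: corrupt the input, train the network to reconstruct the clean input, then keep the hidden activations as features for the supervised task. The toy data, architecture, and hyperparameters are our own illustrative assumptions, not taken from the post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points near a 1-D manifold embedded in 5-D
# (a hypothetical stand-in for real inputs x ~ p(x)).
t = rng.uniform(-1.0, 1.0, size=(200, 1))
X = np.hstack([t, t**2, np.sin(3.0 * t), 0.5 * t, -t])
N, n_in, n_hidden = X.shape[0], X.shape[1], 3

# One-hidden-layer autoencoder parameters.
W1 = rng.normal(0.0, 0.1, size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, size=(n_hidden, n_in))
b2 = np.zeros(n_in)

sigma, lr = 0.3, 0.1  # corruption level and learning rate (arbitrary choices)

for epoch in range(2000):
    X_noisy = X + sigma * rng.normal(size=X.shape)  # corrupt the input
    H = np.tanh(X_noisy @ W1 + b1)                  # hidden features
    X_hat = H @ W2 + b2                             # reconstruction
    G = (X_hat - X) / N                             # grad of MSE w.r.t. X_hat;
                                                    # note the target is the CLEAN X
    # Plain backpropagation through the two layers.
    dW2, db2 = H.T @ G, G.sum(axis=0)
    dH = (G @ W2.T) * (1.0 - H**2)                  # tanh derivative
    dW1, db1 = X_noisy.T @ dH, dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# The features we actually wanted: hidden activations of clean inputs,
# ready to be fed to a supervised classifier or regressor.
features = np.tanh(X @ W1 + b1)
print(features.shape)  # (200, 3)
```

The key design point is in the loss: the network sees the noisy input but is penalized against the clean one, so the hidden layer is pushed to capture structure of \(p(x)\) rather than to memorize the noise.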

Curious AI Blog is here

The Curious AI Company blog kicks off with a series of posts called “Learning by denoising”. In this series, we want to demonstrate why denoising is a good (for certain tasks, perhaps the best) way to do unsupervised learning. (more…)

The Curious AI Blog

Come hither! Deeper learning topics from the minds behind the Ladder Network and Tagger!

This blog is a gentle introduction to the state-of-the-art machine learning technology from The Curious AI Company, culminating in the Ladder and Tagger networks. In other words, read on if you’re interested in unsupervised and semi-supervised classification and perceptual grouping tasks!