Every one of us loves photographs, right? Those small pieces of laminated paper have a lifetime of memories engraved onto them, some happy, some not-so-happy. But each one of them means a lot to us, and that is why we insist on preserving them as timeless mementos. Yes, timeless, we call them. But are they really timeless?

Apart from the images stored on the memory card of your mobile phone or laptop, photographs that feel and look beautiful when first developed can become degraded or tarnished by environmental pressures. Such problems are not unheard of, and until recently, restoring damaged pictures has been a massive challenge. However, there is a ray of hope now that a team from IIT Madras, led by Dr. Rajagopalan, has developed a way to restore such photographs digitally.

The Image Processing and Computer Vision Lab at IIT Madras has made use of artificial neural networks to restore pictures that have been degraded. They have recently published their work in the IEEE Journal of Selected Topics in Signal Processing, describing the method that they have developed.

The method uses artificial neural networks to clean up images that have been damaged by environmental agents such as haze, raindrops, rain streaks, and even motion blur. To train and evaluate their model, the researchers used available databases covering these environmental agents.

Speaking on the research, Dr. Rajagopalan said, “Bad weather in the form of rain and haze causes significant loss of image quality. The presence of rain-drops on camera lenses is a related problem that poses its own set of challenges. These effects not only impact human visibility but can also adversely affect the performance of computer vision systems meant for autonomous driving, drone imaging, and surveillance, to name a few. These degradations have high spatial variability due to non-uniform depth variations in the haze, drop sizes and their locations in raindrops, and rain streak directions and locations.” 

During their research, they found that a single neural network struggled to both identify and clean the degraded portions of an image, so they designed the system so that the process occurs in two distinct stages. In the first stage, known as Degradation Localisation, a neural network identifies and isolates the degraded portions of the picture. The second stage, Degraded Region-Guided Restoration, then cleans up the image using the information produced in the first stage.

The key idea is to predict a degradation mask and use it to guide the restoration process. One of the network layers (involved in stage one) carries out the localisation, and then passes the information it gathers to the so-called "main restoration network", the network used in stage two.
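To make the mask-guided idea concrete, here is a deliberately simplified toy sketch in plain Python, not the authors' actual neural networks. Stage one flags degraded pixels as a binary mask (here by a simple brightness threshold standing in for a learned localisation network); stage two restores only the flagged pixels, guided by that mask (here by averaging clean neighbours standing in for the learned restoration network). All function names and parameters are illustrative.

```python
# Toy two-stage pipeline (conceptual, not the published method).
# Stage 1 predicts a binary "degradation mask"; stage 2 restores
# only the masked pixels, guided by that mask.

def localise_degradation(image, threshold=200):
    """Stage 1: flag pixels whose intensity suggests degradation
    (e.g. an overbright raindrop highlight). Returns a binary mask."""
    return [1 if px > threshold else 0 for px in image]

def restore_with_mask(image, mask):
    """Stage 2: restore only the flagged pixels, here by averaging
    the nearest unflagged neighbours on either side."""
    restored = list(image)
    for i, flagged in enumerate(mask):
        if flagged:
            left = next((image[j] for j in range(i - 1, -1, -1) if not mask[j]), None)
            right = next((image[j] for j in range(i + 1, len(image)) if not mask[j]), None)
            neighbours = [v for v in (left, right) if v is not None]
            if neighbours:
                restored[i] = sum(neighbours) // len(neighbours)
    return restored

# Example: a 1-D "scanline" with two degraded (overbright) pixels.
scanline = [90, 95, 255, 100, 250, 105]
mask = localise_degradation(scanline)      # [0, 0, 1, 0, 1, 0]
print(restore_with_mask(scanline, mask))   # [90, 95, 97, 100, 102, 105]
```

The point of the split mirrors the article: one component only answers "where is the damage?", and the other only answers "what should those pixels look like?", which is an easier pair of problems than asking a single model to do both at once.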