Data compression, also called source coding or bit-rate reduction, is the process of encoding information so that fewer bits are needed to represent it than in the original representation. Compression can be one of two types: either lossy or lossless.
- Lossy: This compression reduces bits by identifying unnecessary or less important information and removing it. Some of the original data is permanently discarded.
- Lossless: This compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in this kind of compression.
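The lossless case is easy to demonstrate: compress, decompress, and check that the data comes back bit for bit. A minimal sketch in Python using the built-in `zlib` module (my choice of tool for illustration; the article does not name one):

```python
import zlib

# Highly redundant input, so there is statistical redundancy to eliminate.
original = b"banana banana banana banana banana banana"

compressed = zlib.compress(original)

# Lossless: decompression recovers the original exactly, bit for bit.
restored = zlib.decompress(compressed)
assert restored == original

# The compressed form is shorter than the redundant original.
assert len(compressed) < len(original)
```

A lossy codec (such as JPEG for images or MP3 for audio) would fail the first assertion by design: it trades exact reconstruction for a smaller output.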
The process of reducing the size of a data file is referred to as data compression. In the context of data transmission, it is also known as source coding (encoding done at the source of the data before it is stored or transmitted), as opposed to channel coding.
Data compression has enabled all sorts of technological advances, and each of us uses it almost every day, whether in text emails and messages, when downloading images, or when streaming HDTV over the internet.
But wait, this article is about the history of data compression, so let’s follow this beautiful infographic and learn more about the progress data compression has made over the last 170+ years.