"Analysis of Quantization-Based Watermarking"
In order to protect the (copy)rights of digital content, means are sought to stop piracy. Several methods are known to be instrumental in achieving this goal. This report considers one such method: digital watermarking, more specifically, quantization-based watermarking. A general watermarking scheme consists of a watermark embedder, a channel representing some form of processing on the watermarked signal, and a watermark detector. The problems common to any watermarking method are the perceptual quality of the watermarked signal and the ability to retrieve the embedded information at the detector.
From current quantization-based watermarking algorithms, such as QIM, DC-QIM, and SCS, it is known that the achievable rates are promising, but that it is hard to meet the required robustness demands. Therefore, improvements to current algorithms are sought that make them more robust against common processing. This report focuses on two possible improvements: the use of error correcting codes (ECC) and the use of adaptive quantization.
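To make the quantization-based embedding concrete, the following is a minimal sketch of basic (binary) QIM: the message bit selects one of two interleaved quantization lattices, and the detector picks the lattice whose reconstruction point lies closest to the received sample. The step size `delta` and the simple offset-by-`delta/2` lattice construction are illustrative choices, not the exact parameters of any algorithm in the report.

```python
import numpy as np

def qim_embed(x, bit, delta):
    """Embed one bit in sample x by quantizing it onto one of two
    interleaved lattices with step size delta (basic QIM)."""
    # Lattice for bit 0 sits at multiples of delta; the lattice for
    # bit 1 is shifted by delta/2.
    d = bit * delta / 2.0
    return np.round((x - d) / delta) * delta + d

def qim_detect(y, delta):
    """Detect the embedded bit by minimum-distance decoding: choose
    the lattice whose nearest point is closest to the received y."""
    err0 = abs(y - qim_embed(y, 0, delta))
    err1 = abs(y - qim_embed(y, 1, delta))
    return 0 if err0 <= err1 else 1
```

As long as the channel perturbs the watermarked sample by less than `delta/4`, the detector recovers the bit correctly, which is exactly why the robustness of such schemes is governed by the quantization step size.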
Watermarking can be seen as a form of communication, so the robustness demand for watermarking is equivalent to the demand of reliable communication in communication models. The use of ECC therefore certainly gives an improvement in robustness, which is confirmed by experiments. Repetition codes are simple to implement and already give a gain in robustness. The concatenation of convolutional codes with repetition codes gives an improvement only in the case of mild degradations due to the processing mentioned above.
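A repetition code of the kind mentioned above can be sketched in a few lines: each message bit is repeated n times before embedding, and the decoder takes a majority vote over each group of n hard-decision bits. The rate (n = 5 here) is an illustrative choice.

```python
import numpy as np

def rep_encode(bits, n=5):
    """Repeat every message bit n times (rate-1/n repetition code)."""
    return np.repeat(bits, n)

def rep_decode(received, n=5):
    """Majority vote over each group of n received (hard-decision) bits."""
    groups = np.asarray(received).reshape(-1, n)
    return (groups.sum(axis=1) > n // 2).astype(int)
```

With n = 5, up to two bit errors per group are corrected, which is the source of the robustness gain; the price is a factor-n reduction of the embedding rate.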
In this report, watermarking of signals with a luminance component is considered, such as digital images and video data. Adaptive quantization refers to the use of a larger quantization step size for high luminance values and a smaller quantization step size for low luminance values. It is known from Weber's law that the human eye is less sensitive to brightness changes at higher luminance values than at lower luminance values. Therefore, using adaptive quantization does not come at the cost of a loss in perceptual quality of the host signal. Adaptive quantization gives a large robustness gain against brightness scaling attacks. However, the adaptive quantization step size must be estimated at the detector, which potentially introduces an additional source of errors in the retrieved message. Experiments show that this effect is limited. Therefore, adaptive quantization improves the robustness of the watermarking scheme.
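One way such a luminance-dependent step size could look is sketched below. The linear scaling, the reference luminance `l_ref`, the `weber` slope, and the lower clip are all hypothetical parameters chosen only to illustrate the Weber's-law idea (equal relative brightness changes are roughly equally visible); they are not the rule used in the report.

```python
import numpy as np

def adaptive_delta(luminance, base_delta=4.0, l_ref=128.0, weber=0.5):
    """Hypothetical adaptive step size: scale a base quantization step
    with local luminance, so bright regions get a larger (less visible)
    step and dark regions a smaller one.  Clipped below so the step
    never vanishes at very dark pixels."""
    scale = 1.0 + weber * (luminance - l_ref) / l_ref
    return base_delta * np.clip(scale, 0.25, None)
```

Because the step size grows with the luminance itself, a brightness scaling of the signal scales the step size along with it, which is the intuition behind the robustness gain against that attack.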
It is valuable to know the performance of the watermarking scheme with the two improvements. The performance measure used is the bit error probability. The total bit error probability is built up from two components: one estimates the bit error probability for the case of fixed quantization under an Additive White Gaussian Noise (AWGN) or uniform noise attack; the other estimates the bit error probability for the case of adaptive quantization without any attack. Models for both bit error probabilities are developed.
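For the first component (fixed quantization under AWGN), the quantity being modelled can also be estimated empirically. The Monte-Carlo sketch below measures the bit error probability of basic QIM under an AWGN attack; it is a simulation baseline, not the analytical model developed in the report, and the host range and trial count are arbitrary choices.

```python
import numpy as np

def qim_ber_awgn(delta, sigma, trials=20000, seed=0):
    """Monte-Carlo estimate of the bit error probability of basic QIM
    with fixed step delta under an AWGN attack of std-dev sigma."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, trials)
    host = rng.uniform(0, 256, trials)
    # Embed: quantize each host sample onto the lattice chosen by its bit.
    d = bits * delta / 2.0
    marked = np.round((host - d) / delta) * delta + d
    # Attack: additive white Gaussian noise.
    noisy = marked + rng.normal(0.0, sigma, trials)
    # Detect: nearest point of the union lattice (spacing delta/2);
    # the parity of its index gives the decoded bit.
    detected = np.round(2.0 * noisy / delta).astype(int) % 2
    return np.mean(detected != bits)
```

An error occurs once the noise pushes a sample more than `delta/4` towards the wrong lattice, so the estimated probability drops quickly as `delta/sigma` grows and approaches 1/2 for very strong noise.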
At the embedder, the distortion compensation parameter α has to be set. The optimal value of this parameter is derived for the case of a Gaussian host signal and an AWGN channel. The resulting optimal parameter α* is compared with an earlier result of Eggers and shown to be identical. But whereas Eggers obtained a numerical function, which he optimizes numerically, our result leads to an analytical function, which can then be optimized numerically.
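The role of α can be illustrated with a distortion-compensated variant of the QIM sketch: the host sample is moved only a fraction α of the way towards the selected lattice point, so α = 1 reduces to plain QIM. The closed-form `eggers_alpha` below is the well-known approximation from the SCS literature for the optimal α under AWGN (with embedding-distortion variance `sigma_w2` and noise variance `sigma_v2`); it is quoted here as background, not as the derivation in this report.

```python
import numpy as np

def dc_qim_embed(x, bit, delta, alpha):
    """Distortion-compensated QIM: move the host sample only a fraction
    alpha of the way towards the nearest lattice point for this bit."""
    d = bit * delta / 2.0
    q = np.round((x - d) / delta) * delta + d   # plain-QIM reconstruction point
    return x + alpha * (q - x)

def dc_qim_detect(y, delta):
    """Minimum-distance detection on the union of both lattices:
    the parity of the nearest point's index is the decoded bit."""
    return int(np.round(2.0 * y / delta)) % 2

def eggers_alpha(sigma_w2, sigma_v2):
    """Approximate optimal distortion-compensation parameter for an
    AWGN channel, as reported by Eggers et al. for SCS."""
    return np.sqrt(sigma_w2 / (sigma_w2 + 2.71 * sigma_v2))
```

Smaller α leaves more host-signal self-noise at the detector but allows a larger step size at equal embedding distortion; the optimal α* trades these two effects off against the channel noise.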
In summary, two methods are used to improve robustness: error correcting codes and an adaptive quantization step size. Both methods are shown to be improvements. In addition, an analytical model of the performance is derived, which can be used to verify the robustness improvement analytically.