
Improving the Compression Efficiency of Digital Images Using an Improved CUR Matrix Decomposition Algorithm

Qinghai Jin (Yunnan Minzu University)

Abstract

The CUR matrix decomposition algorithm loses a large amount of information when compressing images, so the quality of the reconstructed images is low. To overcome this problem, we propose a CUR matrix decomposition algorithm based on standard deviation sampling. Because it retains more image information, it produces higher-quality reconstructed images at the same compression ratio. To further reduce the amount of image information lost during the sampling process of the CUR matrix decomposition algorithm, we also propose an SVD-CUR algorithm. The experimental results verify that our algorithm achieves high image compression efficiency, and they also demonstrate the high precision and robustness of CUR matrix decomposition in handling low-rank sparse matrix data.
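The idea of standard-deviation-based CUR sampling can be illustrated with a minimal sketch: columns and rows with larger standard deviation (more image detail) are sampled with higher probability, and a link matrix U is computed so that C·U·R approximates the original matrix. This is only an illustrative sketch under assumed details, not the paper's exact algorithm; the function name `cur_decompose`, the parameter `k`, and the pseudoinverse-based construction of U are assumptions for the example.

```python
import numpy as np

def cur_decompose(A, k, seed=None):
    """Illustrative CUR decomposition with standard-deviation sampling:
    sample k columns and k rows with probability proportional to their
    standard deviation, then compute the link matrix U by pseudoinverse."""
    rng = np.random.default_rng(seed)
    # sampling probabilities proportional to column / row standard deviation
    col_p = A.std(axis=0); col_p = col_p / col_p.sum()
    row_p = A.std(axis=1); row_p = row_p / row_p.sum()
    cols = rng.choice(A.shape[1], size=k, replace=False, p=col_p)
    rows = rng.choice(A.shape[0], size=k, replace=False, p=row_p)
    C = A[:, cols]                                   # sampled columns
    R = A[rows, :]                                   # sampled rows
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)    # link matrix
    return C, U, R

# usage: a rank-1 test matrix is reconstructed (almost) exactly from k=2 samples
A = np.outer(np.arange(1.0, 9.0), np.arange(1.0, 7.0))
C, U, R = cur_decompose(A, k=2, seed=0)
rel_err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
```

Storing only C, U, and R instead of the full matrix is what yields the compression: for an m×n image and k samples, the storage drops from m·n values to k·(m + n) + k² values.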


Keywords

Image compression; Standard deviation sampling; CUR matrix decomposition; Singular value decomposition; SVD-CUR





DOI: http://dx.doi.org/10.26549/met.v3i1.1185

Copyright © 2019 Qinghai Jin
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.