Data Compression Category
Data compression is an approach to shrinking an original dataset to save storage space. According to a report in The Economist, the amount of digital data in the world is growing explosively, rising from 1.2 zettabytes in 2010 to 1.8 zettabytes in 2011. Compressing data and managing storage cost-effectively is therefore a challenging and important task.
Traditionally, we use compression algorithms to achieve data reduction. The main idea of data compression is to use the fewest bits possible to represent information as accurately as possible. Because the goal is to represent the original information only as accurately as needed, we are sometimes allowed to discard useless detail when encoding the data. Classical compression approaches can be classified into lossless compression and lossy compression; the difference between them is whether any information is lost.
Lossless compression reduces data by identifying and eliminating statistical redundancy in a reversible fashion. To remove the redundant information, it can exploit statistical properties to build a new encoding, as Huffman coding does, or it can use a dictionary model that replaces repeated strings via a sliding-window algorithm. What matters is that for lossless compression, when we restore the data, we get the original data back without losing any information.
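To make the statistical approach concrete, here is a minimal Huffman-coding sketch in Python (my own illustrative code, not part of the original text): it builds a prefix code from symbol frequencies, so frequent symbols receive shorter codes.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a {symbol: bitstring} prefix-code table from symbol frequencies."""
    freq = Counter(text)
    if len(freq) == 1:  # degenerate case: only one distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, unique tie-breaker, {symbol: code-so-far});
    # the tie-breaker keeps the dicts from ever being compared directly.
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, i, right = heapq.heappop(heap)
        # Symbols in the left subtree get a leading '0', in the right a '1'.
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, i, merged))
    return heap[0][2]

text = "abracadabra"
codes = huffman_codes(text)
encoded = "".join(codes[ch] for ch in text)
print(codes)
print(f"{len(encoded)} bits compressed vs {8 * len(text)} bits at one byte per symbol")
```

On "abracadabra", the most frequent symbol 'a' gets a one-bit code, so the 11 symbols encode in 23 bits instead of the 88 bits needed at one byte per symbol.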
Lossy compression reduces data by identifying unnecessary information and irretrievably removing it. The removed information still carries meaning, but it may be useless in a particular field: in some fields we only need the useful information and can ignore the rest, which is why lossy compression works well for images, audio, and video. The trade-off is that we cannot recover the original data once a lossy compression algorithm has been applied.
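As a tiny illustration of the lossy idea (a hypothetical sketch, not any real image or audio codec), the following snippet quantizes 8-bit sample values down to 4 bits. The reconstruction is close to the input, but the discarded low-order detail is gone for good.

```python
# Toy lossy compression via quantization: keep only the high 4 bits of each
# 8-bit sample. Nearby values collapse to the same code, halving the bits
# needed per sample, and the dropped detail cannot be recovered.
samples = [23, 24, 25, 120, 121, 200, 201, 202]

quantized = [s >> 4 for s in samples]   # 4-bit codes in the range 0..15
restored  = [q << 4 for q in quantized] # approximate reconstruction

print(quantized)  # [1, 1, 1, 7, 7, 12, 12, 12] -- neighbors become identical
print(restored)   # close to the input, but not equal: information is lost
```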
For a lossless approach, eliminating statistical redundancy becomes prohibitively expensive as the data grows, because the compressor needs statistics gathered over all of the data. For a large dataset, a lossless compressor must therefore trade off speed against compression ratio.
Beyond classical compression, two methods are used to reduce data at scale: delta compression and data deduplication.
Delta compression takes a new perspective: compressing two very similar files. It compares two files, A and B, and computes their difference (the delta), so that file B can be expressed as file A plus the delta, which saves space. Delta compression is commonly used in source-code version control and file synchronization.
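As a concrete illustration, here is a toy delta sketch built on Python's standard-library difflib; it is an assumption-level example, not how rsync or xdelta actually encode deltas. The delta is a list of copy/insert operations that rebuilds B from A, so only the changed bytes need to be stored alongside a reference to A.

```python
from difflib import SequenceMatcher

def make_delta(a: str, b: str):
    """Compute a delta such that apply_delta(a, delta) == b."""
    ops = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, a, b).get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))          # reuse a slice of A
        elif j1 != j2:                            # 'replace' or 'insert'
            ops.append(("insert", b[j1:j2]))      # store the new bytes literally
        # 'delete' contributes nothing: B simply skips that slice of A
    return ops

def apply_delta(a: str, delta):
    """Reconstruct B from A and the delta."""
    out = []
    for op in delta:
        if op[0] == "copy":
            out.append(a[op[1]:op[2]])
        else:
            out.append(op[1])
    return "".join(out)

a = "The quick brown fox jumps over the lazy dog."
b = "The quick brown fox leaps over the lazy dog!"
delta = make_delta(a, b)
assert apply_delta(a, delta) == b
print(delta)  # mostly 'copy' ops plus the few changed characters
```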
Data deduplication targets large-scale systems and works at a coarse granularity: file level, or chunk level with chunks of roughly 8 KB. Chunk level is preferred over file level because it achieves better compression performance. In general, data deduplication splits the backup data into chunks and identifies each chunk by its cryptographically secure hash signature (e.g., SHA-1). When several chunks are identical, it removes the duplicates and stores only one copy, which achieves the goal of saving space. It stores only the unique chunks, plus file metadata that can be used to reconstruct the original file.
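To make the pipeline concrete, here is a minimal deduplication sketch (illustrative code, not a real backup system). It uses fixed-size 8 KB chunks, whereas production systems often use content-defined chunking, and the "file metadata" is simply the ordered list of SHA-1 fingerprints.

```python
import hashlib

CHUNK_SIZE = 8 * 1024  # 8 KB chunks, as mentioned above

def dedup_store(data: bytes, store: dict):
    """Split data into chunks, keep one copy per SHA-1 fingerprint,
    and return the file 'recipe': an ordered list of fingerprints."""
    recipe = []
    for off in range(0, len(data), CHUNK_SIZE):
        chunk = data[off:off + CHUNK_SIZE]
        fp = hashlib.sha1(chunk).hexdigest()
        if fp not in store:          # only unique chunks are stored
            store[fp] = chunk
        recipe.append(fp)
    return recipe

def restore(recipe, store):
    """Reconstruct the original file from its chunk fingerprints."""
    return b"".join(store[fp] for fp in recipe)

store = {}
backup1 = b"A" * 20000 + b"B" * 20000
backup2 = b"A" * 20000 + b"C" * 20000   # first half duplicates backup1
r1 = dedup_store(backup1, store)
r2 = dedup_store(backup2, store)
assert restore(r1, store) == backup1 and restore(r2, store) == backup2
print(len(store), "unique chunks stored for", len(r1) + len(r2), "chunk references")
```

Because the first half of backup2 duplicates backup1, those chunks are stored only once and shared between both file recipes.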