HDU1053 Huffman Coding
Entropy
Time Limit: 2000/1000 MS (Java/Others) Memory Limit: 65536/32768 K (Java/Others)
Total Submission(s): 5972 Accepted Submission(s): 2507
An entropy encoder is a data encoding method that achieves lossless data
compression by encoding a message with “wasted” or “extra” information
removed. In other words, entropy encoding removes information that was
not necessary in the first place to accurately encode the message. A
high degree of entropy implies a message with a great deal of wasted
information; English text encoded in ASCII is an example of a message
type that has very high entropy. Already compressed messages, such as
JPEG graphics or ZIP archives, have very little entropy and do not
benefit from further attempts at entropy encoding.
English text
encoded in ASCII has a high degree of entropy because all characters are
encoded using the same number of bits, eight. It is a known fact that
the letters E, L, N, R, S and T occur at a considerably higher frequency
than do most other letters in English text. If a way could be found to
encode just these letters with four bits, then the new encoding would be
smaller, would contain all the original information, and would have
less entropy. ASCII uses a fixed number of bits for a reason, however:
it’s easy, since one is always dealing with a fixed number of bits to
represent each possible glyph or character. How would an encoding scheme
that used four bits for the above letters be able to distinguish
between the four-bit codes and eight-bit codes? This seemingly difficult
problem is solved using what is known as a “prefix-free
variable-length” encoding.
In such an encoding, any number of
bits can be used to represent any glyph, and glyphs not present in the
message are simply not encoded. However, in order to be able to recover
the information, no bit pattern that encodes a glyph is allowed to be
the prefix of any other encoding bit pattern. This allows the encoded
bitstream to be read bit by bit, and whenever a set of bits is
encountered that represents a glyph, that glyph can be decoded. If the
prefix-free constraint were not enforced, then such a decoding would be
impossible.
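To make that bit-by-bit decoding concrete, here is a minimal C++ sketch (not part of the original problem; the code table and bit pattern are simply the ones from the “AAAAABCD” example worked through next). It reads the encoded bit string left to right, growing a candidate code until it matches a table entry, then emits that glyph and starts over.

#include <cstdio>
#include <map>
#include <string>

int main() {
    // prefix-free code table from the example below
    std::map<std::string, char> table = {
        {"0", 'A'}, {"10", 'B'}, {"110", 'C'}, {"111", 'D'}};
    std::string bits = "0000010110111", cur, out;
    for (char b : bits) {
        cur += b;                       // extend the candidate code
        auto it = table.find(cur);
        if (it != table.end()) {        // a complete code has been read
            out += it->second;
            cur.clear();
        }
    }
    printf("%s\n", out.c_str());        // prints AAAAABCD
    return 0;
}

Because no code is a prefix of another, the first match found is always the right one, which is exactly why the prefix-free property makes decoding unambiguous.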
Consider the text “AAAAABCD”. Using ASCII, encoding
this would require 64 bits. If, instead, we encode “A” with the bit
pattern “00”, “B” with “01”, “C” with “10”, and “D” with “11” then we
can encode this text in only 16 bits; the resulting bit pattern would be
“0000000000011011”. This is still a fixed-length encoding, however;
we’re using two bits per glyph instead of eight. Since the glyph “A”
occurs with greater frequency, could we do better by encoding it with
fewer bits? In fact we can, but in order to maintain a prefix-free
encoding, some of the other bit patterns will become longer than two
bits. An optimal encoding is to encode “A” with “0”, “B” with “10”, “C”
with “110”, and “D” with “111”. (This is clearly not the only optimal
encoding, as it is obvious that the encodings for B, C and D could be
interchanged freely for any given encoding without increasing the size
of the final encoded message.) Using this encoding, the message encodes
in only 13 bits to “0000010110111”, a compression ratio of 4.9 to 1
(that is, each bit in the final encoded message represents as much
information as did 4.9 bits in the original encoding). Read through this
bit pattern from left to right and you’ll see that the prefix-free
encoding makes it simple to decode this into the original text even
though the codes have varying bit lengths.
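As a quick check of that count: the five “A”s cost 5 × 1 bit, “B” costs 2 bits, and “C” and “D” cost 3 bits each, so the total is 5 + 2 + 3 + 3 = 13 bits, and the quoted ratio is simply 64 / 13 ≈ 4.9.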
As a second example,
consider the text “THE CAT IN THE HAT”. In this text, the letter “T” and
the space character both occur with the highest frequency, so they will
clearly have the shortest encoding bit patterns in an optimal encoding.
The letters “C”, “I” and “N” only occur once, however, so they will
have the longest codes.
There are many possible sets of
prefix-free variable-length bit patterns that would yield the optimal
encoding, that is, that would allow the text to be encoded in the fewest
number of bits. One such optimal encoding is to encode spaces with
“00”, “A” with “100”, “C” with “1110”, “E” with “1111”, “H” with “110”,
“I” with “1010”, “N” with “1011” and “T” with “01”. The optimal encoding
therefore requires only 51 bits compared to the 144 that would be
necessary to encode the message with 8-bit ASCII encoding, a compression
ratio of 2.8 to 1.
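Again, a quick check of the 51-bit figure: the frequencies in “THE CAT IN THE HAT” are T:4, space:4, H:3, A:2, E:2, C:1, I:1, N:1, so with the code lengths above the total is 4×2 + 4×2 + 3×3 + 2×3 + 2×4 + 1×4 + 1×4 + 1×4 = 51 bits, against 18 × 8 = 144 bits in ASCII, i.e. 144 / 51 ≈ 2.8.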
The input file will contain a list of text strings, one per line. The text
strings will consist only of uppercase alphanumeric characters and
underscores (which are used in place of spaces). The end of the input
will be signalled by a line containing only the word “END” as the text
string. This line should not be processed.
For each text string in the input, output the length in bits of the 8-bit
ASCII encoding, the length in bits of an optimal prefix-free
variable-length encoding, and the compression ratio accurate to one
decimal point.
Sample Input
AAAAABCD
THE_CAT_IN_THE_HAT
END

Sample Output
64 13 4.9
144 51 2.8
// Couldn't quite figure this part out at first... count how many times each
// character occurs, store the counts in a min-priority queue, then keep taking
// the two smallest values, adding them together and pushing the sum back,
// until only one node is left in the queue.
#include<iostream>
#include<cstdio>
#include<cstring>
#include<queue>
#include<functional>
#include<vector>
using namespace std;
int a[30];        // a[0] counts '_', a[1..26] count 'A'..'Z'
char s[100005];   // input buffer (size chosen generously)
int ans;
int main()
{
    while(scanf("%s",s)!=EOF)
    {
        if(!strcmp(s,"END"))
            break;
        priority_queue<int,vector<int>,greater<int> >q;   // min-heap of frequencies
        int len=strlen(s);
        memset(a,0,sizeof(a));
        for(int i=0;i<len;i++)
        {
            if(s[i]=='_')
                a[0]++;
            else a[s[i]-'A'+1]++;
        }
        for(int i=0;i<=26;i++)
            if(a[i]!=0)
                q.push(a[i]);
        if(q.size()==1)          // only one distinct glyph: give it a 1-bit code
            ans=len;
        else
        {
            ans=0;
            while(q.size()!=1)   // Huffman merging: each merge adds x+y to the total
            {
                int x=q.top();
                q.pop();
                int y=q.top();
                q.pop();
                ans=ans+x+y;
                q.push(x+y);
            }
        }
        printf("%d %d %.1lf\n",8*len,ans,(double)8*len/(double)ans);
    }
    return 0;
}
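About the step the comment above admits is hard to see: every time the loop merges two weights x and y, it adds x + y to ans. A symbol's frequency is therefore counted once for every merge that happens on the path from its leaf to the root, i.e. once per bit of its code, so the final sum is exactly the sum over all characters of frequency × code length, which is the length of the Huffman-encoded message. The sketch below is not the judge solution, just an illustrative check under that reading: it builds the Huffman tree explicitly and verifies that the sum of frequency × depth matches the merge-sum for "THE_CAT_IN_THE_HAT".

#include <cstdio>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

struct Node {
    long long w;          // subtree weight (frequency sum)
    int left, right;      // child indices in the pool, -1 for a leaf
};

int main() {
    const char* text = "THE_CAT_IN_THE_HAT";
    long long freq[27] = {0};                       // '_' plus 'A'..'Z'
    for (const char* p = text; *p; ++p)
        freq[*p == '_' ? 0 : *p - 'A' + 1]++;

    std::vector<Node> pool;
    typedef std::pair<long long, int> PW;           // (weight, node index)
    std::priority_queue<PW, std::vector<PW>, std::greater<PW> > pq;
    for (int i = 0; i <= 26; ++i)
        if (freq[i]) {
            pool.push_back({freq[i], -1, -1});
            pq.push(PW(freq[i], (int)pool.size() - 1));
        }

    long long mergeSum = 0;
    while (pq.size() > 1) {                         // same loop as the solution,
        PW a = pq.top(); pq.pop();                  // but remembering the children
        PW b = pq.top(); pq.pop();
        mergeSum += a.first + b.first;
        pool.push_back({a.first + b.first, a.second, b.second});
        pq.push(PW(a.first + b.first, (int)pool.size() - 1));
    }

    long long weighted = 0;                         // sum of freq * depth
    std::vector<std::pair<int, int> > stk;          // (node index, depth)
    stk.push_back(std::make_pair(pq.top().second, 0));
    while (!stk.empty()) {
        int idx = stk.back().first, d = stk.back().second;
        stk.pop_back();
        if (pool[idx].left < 0)
            weighted += pool[idx].w * d;            // leaf: frequency * code length
        else {
            stk.push_back(std::make_pair(pool[idx].left, d + 1));
            stk.push_back(std::make_pair(pool[idx].right, d + 1));
        }
    }
    printf("merge-sum = %lld, sum(freq*depth) = %lld\n", mergeSum, weighted);
    return 0;                                       // both print 51 for this text
}

Either quantity gives the same answer, which is why the submitted code never needs to build the tree at all.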