*HDU1053 Huffman Coding
Entropy
Time Limit: 2000/1000 MS (Java/Others) Memory Limit: 65536/32768 K (Java/Others)
Total Submission(s): 5972 Accepted Submission(s): 2507
Problem Description
An entropy encoder is a data encoding method that achieves lossless data
compression by encoding a message with “wasted” or “extra” information
removed. In other words, entropy encoding removes information that was
not necessary in the first place to accurately encode the message. A
high degree of entropy implies a message with a great deal of wasted
information; English text encoded in ASCII is an example of a message
type that has very high entropy. Already compressed messages, such as
JPEG graphics or ZIP archives, have very little entropy and do not
benefit from further attempts at entropy encoding.
English text
encoded in ASCII has a high degree of entropy because all characters are
encoded using the same number of bits, eight. It is a known fact that
the letters E, L, N, R, S and T occur at a considerably higher frequency
than do most other letters in English text. If a way could be found to
encode just these letters with four bits, then the new encoding would be
smaller, would contain all the original information, and would have
less entropy. ASCII uses a fixed number of bits for a reason, however:
it’s easy, since one is always dealing with a fixed number of bits to
represent each possible glyph or character. How would an encoding scheme
that used four bits for the above letters be able to distinguish
between the four-bit codes and eight-bit codes? This seemingly difficult
problem is solved using what is known as a “prefix-free
variable-length” encoding.
In such an encoding, any number of
bits can be used to represent any glyph, and glyphs not present in the
message are simply not encoded. However, in order to be able to recover
the information, no bit pattern that encodes a glyph is allowed to be
the prefix of any other encoding bit pattern. This allows the encoded
bitstream to be read bit by bit, and whenever a set of bits is
encountered that represents a glyph, that glyph can be decoded. If the
prefix-free constraint were not enforced, then such a decoding would be
impossible.
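To make the bit-by-bit decoding concrete, here is a minimal sketch (not part of the problem statement) that decodes a bitstream against the prefix-free code table from the "AAAAABCD" example discussed next, accumulating bits until they form a complete code word:

#include<cstdio>
#include<map>
#include<string>
using namespace std;

int main()
{
    // prefix-free code table from the "AAAAABCD" example below
    map<string,char> code;
    code["0"]='A'; code["10"]='B'; code["110"]='C'; code["111"]='D';

    string bits="0000010110111";   // the 13-bit encoding of "AAAAABCD"
    string cur;
    for(size_t i=0;i<bits.size();i++)
    {
        cur+=bits[i];
        if(code.count(cur))        // a complete code word has been read
        {
            putchar(code[cur]);
            cur.clear();
        }
    }
    putchar('\n');                 // prints AAAAABCD
    return 0;
}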
Consider the text “AAAAABCD”. Using ASCII, encoding
this would require 64 bits. If, instead, we encode “A” with the bit
pattern “00”, “B” with “01”, “C” with “10”, and “D” with “11” then we
can encode this text in only 16 bits; the resulting bit pattern would be
“0000000000011011”. This is still a fixed-length encoding, however;
we’re using two bits per glyph instead of eight. Since the glyph “A”
occurs with greater frequency, could we do better by encoding it with
fewer bits? In fact we can, but in order to maintain a prefix-free
encoding, some of the other bit patterns will become longer than two
bits. An optimal encoding is to encode “A” with “0”, “B” with “10”, “C”
with “110”, and “D” with “111”. (This is clearly not the only optimal
encoding, as it is obvious that the encodings for B, C and D could be
interchanged freely for any given encoding without increasing the size
of the final encoded message.) Using this encoding, the message encodes
in only 13 bits to “0000010110111”, a compression ratio of 4.9 to 1
(that is, each bit in the final encoded message represents as much
information as did 4.9 bits in the original encoding). Read through this
bit pattern from left to right and you’ll see that the prefix-free
encoding makes it simple to decode this into the original text even
though the codes have varying bit lengths.
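To see where the 13 bits come from, note the frequencies in "AAAAABCD": A occurs 5 times, and B, C, D once each. With the codes above, the encoded length is 5×1 + 1×2 + 1×3 + 1×3 = 13 bits, and 64 / 13 ≈ 4.9 gives the compression ratio. The same total falls out of the greedy Huffman construction used in the solution later in this post: merge the two smallest weights 1 + 1 = 2, then 2 + 1 = 3, then 3 + 5 = 8, and the sum of the merge costs 2 + 3 + 8 = 13 is the encoded length.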
As a second example,
consider the text “THE CAT IN THE HAT”. In this text, the letter “T” and
the space character both occur with the highest frequency, so they will
clearly have the shortest encoding bit patterns in an optimal encoding.
The letters “C”, “I” and “N” only occur once, however, so they will
have the longest codes.
There are many possible sets of
prefix-free variable-length bit patterns that would yield the optimal
encoding, that is, that would allow the text to be encoded in the fewest
number of bits. One such optimal encoding is to encode spaces with
“00”, “A” with “100”, “C” with “1110”, “E” with “1111”, “H” with “110”,
“I” with “1010”, “N” with “1011” and “T” with “01”. The optimal encoding
therefore requires only 51 bits compared to the 144 that would be
necessary to encode the message with 8-bit ASCII encoding, a compression
ratio of 2.8 to 1.
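The 51-bit figure can be checked directly: in "THE CAT IN THE HAT" the space and "T" each occur 4 times, "H" 3 times, "A" and "E" twice each, and "C", "I", "N" once each. With the code lengths above this gives 4×2 (space) + 4×2 (T) + 3×3 (H) + 2×3 (A) + 2×4 (E) + 1×4 (C) + 1×4 (I) + 1×4 (N) = 51 bits, versus 18 × 8 = 144 bits in ASCII, and 144 / 51 ≈ 2.8.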
Input
The input file will contain a list of text strings, one per line. The text
strings will consist only of uppercase alphanumeric characters and
underscores (which are used in place of spaces). The end of the input
will be signalled by a line containing only the word “END” as the text
string. This line should not be processed.
Output
For each text string in the input, output the length in bits of the 8-bit
ASCII encoding, the length in bits of an optimal prefix-free
variable-length encoding, and the compression ratio accurate to one
decimal point.
Sample Input
AAAAABCD
THE_CAT_IN_THE_HAT
END

Sample Output
64 13 4.9
144 51 2.8
// Took me a while to wrap my head around this... count how many times each character
// occurs, store the counts as nodes in a min-priority queue, then repeatedly take the
// two smallest, add them together and push the sum back until only one node remains.
#include<iostream>
#include<cstdio>
#include<cstring>
#include<queue>
#include<functional>
#include<vector>
using namespace std;

int a[128];     // frequency count per character ('_', uppercase letters, digits)
char s[10005];  // input line buffer (size is an assumption; the statement gives no explicit limit)
int ans;        // length in bits of the optimal prefix-free encoding

int main()
{
    while(scanf("%s",s)==1)
    {
        if(!strcmp(s,"END"))
            break;
        // min-heap of subtree weights for the Huffman construction
        priority_queue<int,vector<int>,greater<int> > q;
        int len=strlen(s);
        memset(a,0,sizeof(a));
        for(int i=0;i<len;i++)
            a[(int)s[i]]++;
        for(int i=0;i<128;i++)
            if(a[i]!=0)
                q.push(a[i]);
        if(q.size()==1)
            ans=len;   // only one distinct symbol: one bit per occurrence
        else
        {
            ans=0;
            while(q.size()!=1)
            {
                // merge the two lightest subtrees and record the merge cost
                int x=q.top();
                q.pop();
                int y=q.top();
                q.pop();
                ans=ans+x+y;
                q.push(x+y);
            }
        }
        printf("%d %d %.1f\n",8*len,ans,8.0*len/ans);
    }
    return 0;
}
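The loop only ever sums merge costs, yet that sum is exactly the encoded length: each symbol's frequency participates in one merge for every bit of its eventual code, so the running total equals the sum of frequency × code length over all symbols. Run on the sample input above (AAAAABCD and THE_CAT_IN_THE_HAT, terminated by END), the program prints 64 13 4.9 and 144 51 2.8, the same figures derived in the problem statement.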