一、Understanding the secondary sort example

  1. Requirement analysis: sort by the first field first, then by the second field when the first fields are equal.

     Unsorted raw data            Sorted result
     a,1                          a,1
     b,1                          a,2
     a,2          [sort]          a,100
     b,6          ===>            b,-3
     c,2                          b,-2
     b,-2                         b,1
     a,100                        b,6
     b,-3                         c,-7
     c,-7                         c,2
  2. Analysis of the MapReduce process
     1> The data to analyze is passed in through input() into map().
     2> map() filters the data step by step until we get the records we want.
     3> Custom counters can be added in the filtering logic.
     4> The filtered records are written to the context, and the data enters the shuffle phase.
     5> Most of the shuffle work can be said to happen on the map() side.
     6> In the shuffle, data first goes through the default partitioner (HashPartitioner), whose rule is
        (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks. By default there is a single reduce task,
        so there is a single output file. From my own output tests, even if you configure a custom
        partitioner, it is not actually used unless you also set the number of reduce tasks; otherwise
        the default partitioning still applies (see the partitioner sketch after the effect table at the
        end of this section).
     7> After partitioning, the data within each partition is sorted, and the sort is by key. We can define
        a custom data type and give it its own ordering: for a secondary sort, define a composite key that
        compares the first field, and compares the second field only when the first fields are equal. After
        the partition sort comes grouping, which by default also groups by key; a custom grouping rule needs
        to implement RawComparator.
     8> After grouping comes merging and merge sort, and then the data enters reduce(). Grouping decides which
        values are handed to a single reduce() call together, while partitioning decides which reducer a record
        goes to and therefore the number of files produced in the reduce phase. The way I understand it,
        grouping is essentially a shuffle-phase optimization for the program.
  3. Analysis of the secondary sort
     1> From the data above, we can define a custom data type that holds the first and second fields, plus a
        comparator that sorts by the first field of the key. The custom type needs to implement
        WritableComparable (the comparator can also be written separately by extending WritableComparator);
        whichever is more convenient.
     2> Next, the partitioning step. This example produces a single sorted output file, so no custom
        partitioner is needed; even if one is defined it will not be used here. A custom partitioner must
        extend Partitioner (note: extend, not implement) and override the partitioning rule.
     3> Then the grouping step. Grouping is still worthwhile as an optimization; here the grouping rule is
        to group by the first field of the custom key. Grouping requires implementing RawComparator.
     4> Finally, consider what else could be optimized: the volume of the source data, whether fields are
        always present, their lengths and types, whether to use a combiner or a custom compression codec,
        and how negative values behave. Since the comparator already orders by the second field, I see no
        need for the trick of adding or subtracting a large constant.
     Effect:
     Source data    After map()    After shuffle         After reduce()
     a,1            a#1,1          a#1  [1,2,100]        a 1
     b,1            b#1,1          b#-3 [-3,-2,1,6]      a 2
     a,2            a#2,2          c#-7 [-7,2]           a 100
     b,6            b#6,6                                b -3
     c,2            c#2,2                                b -2
     b,-2           b#-2,-2                              b 1
     a,100          a#100,100                            b 6
     b,-3           b#-3,-3                              c -7
     c,-7           c#-7,-7                              c 2
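
     For reference, below is a minimal sketch of the default partitioning rule described in 6> above. It
     mirrors the (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks behaviour of Hadoop's HashPartitioner;
     the class name DefaultStylePartitioner is made up for illustration and is not part of the example code.

DefaultStylePartitioner.java ## illustration only
=================================================
package com.bigdata_senior.SSortMr;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Partitioner;

// Sketch of the default partitioning rule (mirrors HashPartitioner); illustrative only.
public class DefaultStylePartitioner extends Partitioner<SecondaryWritable, LongWritable> {

    @Override
    public int getPartition(SecondaryWritable key, LongWritable value, int numReduceTasks) {
        // mask the sign bit so the result is never negative, then spread keys across the reducers
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}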

二、Secondary sort example code

SSortMr.java ## main class
============
package com.bigdata_senior.SSortMr;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SSortMr {

    // Mapper class
    private static class SSortMapper
            extends Mapper<LongWritable, Text, SecondaryWritable, LongWritable> {

        private SecondaryWritable mapOutKey = new SecondaryWritable();
        private LongWritable mapOutValue = new LongWritable();

        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {

            String lineValue = value.toString();
            String[] strValue = lineValue.split(",");
            mapOutKey.set(strValue[0], Integer.valueOf(strValue[1]));
            mapOutValue.set(Integer.valueOf(strValue[1]));
            context.write(mapOutKey, mapOutValue);
            System.out.println("key-->" + mapOutKey + " value-->" + mapOutValue);
        }
    }

    // Reducer class
    private static class SSortReduce
            extends Reducer<SecondaryWritable, LongWritable, Text, LongWritable> {

        private Text reduceOutKey = new Text();

        @Override
        public void reduce(SecondaryWritable key, Iterable<LongWritable> values, Context context)
                throws IOException, InterruptedException {

            for (LongWritable value : values) {
                reduceOutKey.set(key.getFirst() + "#" + key.getSecond());
                context.write(reduceOutKey, value);
            }
        }
    }

    // Driver
    public int run(String[] args) throws Exception {

        Configuration configuration = new Configuration();
        Job job = Job.getInstance(configuration, this.getClass().getSimpleName());
        job.setJarByClass(this.getClass());
        //job.setNumReduceTasks(3);

        // input
        Path inPath = new Path(args[0]);
        FileInputFormat.addInputPath(job, inPath);

        // output
        Path outPath = new Path(args[1]);
        FileOutputFormat.setOutputPath(job, outPath);

        // mapper
        job.setMapperClass(SSortMapper.class);
        job.setMapOutputKeyClass(SecondaryWritable.class);
        job.setMapOutputValueClass(LongWritable.class);

        // partitioner
        //job.setPartitionerClass(SecondaryPartionerCLass.class);

        // group
        job.setGroupingComparatorClass(SecondaryGroupClass.class);

        // reducer
        job.setReducerClass(SSortReduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        // submit job
        boolean isSuccess = job.waitForCompletion(true);

        return isSuccess ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {

        args = new String[]{
            "hdfs://hadoop09-linux-01.ibeifeng.com:8020/user/liuwl/tmp/sortmr/input",
            "hdfs://hadoop09-linux-01.ibeifeng.com:8020/user/liuwl/tmp/sortmr/output13"
        };
        // run job
        int status = new SSortMr().run(args);
        System.exit(status);
    }
}
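
If several sorted output files are wanted instead of one, the observation in 6> of section 一 suggests the
reduce-task count and the custom partitioner have to be enabled together in the driver. A minimal sketch of
that assumed change (both lines already appear, commented out, in the driver above):

    // assumed driver tweak: enable both lines together to get one sorted file per partition
    job.setNumReduceTasks(3);
    job.setPartitionerClass(SecondaryPartionerCLass.class);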
SecondaryWritable.java ## custom data type
======================
package com.bigdata_senior.SSortMr;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

public class SecondaryWritable implements WritableComparable<SecondaryWritable> {

    private String first;
    private int second;

    public SecondaryWritable() {}

    public SecondaryWritable(String first, int second) {
        this.set(first, second);
    }

    public void set(String first, int second) {
        this.first = first;
        this.second = second;
    }

    public String getFirst() {
        return first;
    }

    public void setFirst(String first) {
        this.first = first;
    }

    public int getSecond() {
        return second;
    }

    public void setSecond(int second) {
        this.second = second;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(this.first);
        out.writeInt(this.second);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        this.first = in.readUTF();
        this.second = in.readInt();
    }

    @Override
    public int compareTo(SecondaryWritable o) {
        // compare by the first field; fall back to the second field only on ties
        int comp = this.first.compareTo(o.first);
        if (0 != comp) {
            return comp;
        }
        return Integer.valueOf(this.second).compareTo(Integer.valueOf(o.second));
    }

    @Override
    public String toString() {
        return first + "#" + second;
    }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + ((first == null) ? 0 : first.hashCode());
        result = prime * result + second;
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;
        if (obj == null)
            return false;
        if (getClass() != obj.getClass())
            return false;
        SecondaryWritable other = (SecondaryWritable) obj;
        if (first == null) {
            if (other.first != null)
                return false;
        } else if (!first.equals(other.first))
            return false;
        if (second != other.second)
            return false;
        return true;
    }
}
SecondaryPartionerCLass.java ## custom partitioner (commented out in the driver, not used)
============================
package com.bigdata_senior.SSortMr;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Partitioner;

public class SecondaryPartionerCLass extends Partitioner<SecondaryWritable, LongWritable> {

    @Override
    public int getPartition(SecondaryWritable key, LongWritable value,
            int numPartitions) {
        return (key.getFirst().hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}
SecondaryGroupClass.java ## custom grouping comparator
========================
package com.bigdata_senior.SSortMr;

import java.util.Arrays;

import org.apache.hadoop.io.RawComparator;
import org.apache.hadoop.io.WritableComparator;

public class SecondaryGroupClass implements RawComparator<SecondaryWritable> {

    @Override
    public int compare(SecondaryWritable o1, SecondaryWritable o2) {
        System.out.println("o1: " + o1.toString() + " o2: " + o2.toString());
        return o1.getFirst().compareTo(o2.getFirst());
    }

    @Override
    public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
        System.out.println("b1: " + Arrays.toString(b1) + " b2: " + Arrays.toString(b2));
        // compare only the serialized first field: start at the record offsets s1/s2
        // and skip the trailing 4 bytes written by writeInt(second)
        return WritableComparator.compareBytes(b1, s1, l1 - 4, b2, s2, l2 - 4);
    }
}
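
As a gentler alternative to writing the raw byte comparison by hand, the grouping rule can also be expressed
by extending WritableComparator, which deserializes the keys before comparing. This is a hedged sketch rather
than part of the original code; the class name SecondaryGroupComparator is made up for illustration:

SecondaryGroupComparator.java ## illustrative alternative
=============================
package com.bigdata_senior.SSortMr;

import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;

public class SecondaryGroupComparator extends WritableComparator {

    protected SecondaryGroupComparator() {
        // true -> let WritableComparator create SecondaryWritable instances for deserialization
        super(SecondaryWritable.class, true);
    }

    @SuppressWarnings("rawtypes")
    @Override
    public int compare(WritableComparable a, WritableComparable b) {
        SecondaryWritable k1 = (SecondaryWritable) a;
        SecondaryWritable k2 = (SecondaryWritable) b;
        // group only by the first field so all values of one letter reach the same reduce() call
        return k1.getFirst().compareTo(k2.getFirst());
    }
}

It would be registered the same way, with job.setGroupingComparatorClass(SecondaryGroupComparator.class).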
Alternatively: ## works for small data sets, but very resource-intensive for large ones
SSortMr2.java
=============
package com.bigdata_senior.SSortMr2;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SSortMr2 {

    // Mapper class
    private static class SSortMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

        private Text mapOutKey = new Text();
        private LongWritable mapOutValue = new LongWritable();

        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {

            String lineValue = value.toString();
            String[] strValue = lineValue.split(",");
            mapOutKey.set(strValue[0]);
            mapOutValue.set(Integer.valueOf(strValue[1]));
            context.write(mapOutKey, mapOutValue);
            System.out.println("key-->" + mapOutKey + " value-->" + mapOutValue);
        }
    }

    // Reducer class: collects all values of a key and sorts them in memory
    private static class SSortReduce extends Reducer<Text, LongWritable, Text, Long> {

        @Override
        public void reduce(Text key, Iterable<LongWritable> values, Context context)
                throws IOException, InterruptedException {

            List<Long> longList = new ArrayList<Long>();
            for (LongWritable value : values) {
                longList.add(value.get());
            }
            Collections.sort(longList);
            for (Long value : longList) {
                System.out.println("key--> " + key + " value--> " + value);
                context.write(key, value);
            }
        }
    }

    // Driver
    public int run(String[] args) throws Exception {

        Configuration configuration = new Configuration();
        Job job = Job.getInstance(configuration, this.getClass().getSimpleName());
        job.setJarByClass(this.getClass());

        // input
        Path inPath = new Path(args[0]);
        FileInputFormat.addInputPath(job, inPath);

        // output
        Path outPath = new Path(args[1]);
        FileOutputFormat.setOutputPath(job, outPath);

        // mapper
        job.setMapperClass(SSortMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);

        // reducer
        job.setReducerClass(SSortReduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Long.class);

        // submit job
        boolean isSuccess = job.waitForCompletion(true);

        return isSuccess ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {

        args = new String[]{
            "hdfs://hadoop09-linux-01.ibeifeng.com:8020/user/liuwl/tmp/sortmr/input",
            "hdfs://hadoop09-linux-01.ibeifeng.com:8020/user/liuwl/tmp/sortmr/output22"
        };
        // run job
        int status = new SSortMr2().run(args);
        System.exit(status);
    }
}

三、A simple understanding of MapReduce join

  1. join (combining)
  2. That is, combining data from two or more sources into one output.
  3. Having learned Hive, MapReduce join no longer feels like the focus, because handling a join in MapReduce:
     1> limits the number of tables that can be joined
     2> is tedious to code, involves varied filtering, and cases are easily missed
     3> consumes a lot of resources
  4. A MapReduce join roughly means loading both tables and, since their records arrive mixed together, tagging
     each record with a custom data type so the two tables can be told apart; reduce() then pulls out each
     table's records and writes the joined output. There are of course many other ways to handle a join, for
     example loading one table into an in-memory collection in setup(); a sketch of that variant follows this
     list.
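
To make the setup() variant in point 4 concrete, below is a minimal map-side join sketch. It assumes the
smaller customer table sits at a hard-coded HDFS path and that the field layouts match the example in section
四 (customer: cid,cname,phone; order: cid,cname,price,date); the class name MapJoinMapper and the path are
made up for illustration.

MapJoinMapper.java ## illustrative map-side join sketch
==================
package com.bigdata_senior.joinMr;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MapJoinMapper extends Mapper<LongWritable, Text, NullWritable, Text> {

    private Map<String, String> customerMap = new HashMap<String, String>();
    private Text outputValue = new Text();

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // assumed location of the small customer table; adjust to the real path
        Path customerPath = new Path("/user/liuwl/tmp/join/customer/customer.txt");
        FileSystem fs = FileSystem.get(context.getConfiguration());
        BufferedReader reader = new BufferedReader(new InputStreamReader(fs.open(customerPath)));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                // customer line: cid,cname,phone -> key by cid, keep the rest as the value
                String[] fields = line.split(",");
                if (fields.length == 3) {
                    customerMap.put(fields[0], fields[1] + "," + fields[2]);
                }
            }
        } finally {
            reader.close();
        }
    }

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // order line: cid,cname,price,date
        String[] fields = value.toString().split(",");
        if (fields.length != 4) {
            return;
        }
        String customerInfo = customerMap.get(fields[0]);
        if (customerInfo == null) {
            return; // no matching customer: inner-join semantics
        }
        outputValue.set(fields[0] + "," + customerInfo + "," + fields[1] + "," + fields[2] + "," + fields[3]);
        context.write(NullWritable.get(), outputValue);
    }
}

With this mapper the join finishes on the map side, so the driver could run it map-only via
job.setNumReduceTasks(0).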

四、MapReduce join code example

JoinMr.java ## main class
===========
package com.bigdata_senior.joinMr;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class JoinMr {

    // Mapper class
    private static class WordCountMapper extends
            Mapper<LongWritable, Text, LongWritable, JoinWritable> {

        private LongWritable mapoutputkey = new LongWritable();
        private JoinWritable mapoutputvalue = new JoinWritable();

        @Override
        protected void setup(Context context) throws IOException,
                InterruptedException {
        }

        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {

            String lineValue = value.toString();
            String[] strValue = lineValue.split(",");

            // customer records have 3 fields, order records have 4; skip anything else
            int length = strValue.length;
            if (3 != length && 4 != length) {
                return;
            }

            // get cid
            Long cid = Long.valueOf(strValue[0]);
            // get cname
            String cname = strValue[1];

            // set customer
            if (3 == length) {
                String phone = strValue[2];
                mapoutputkey.set(cid);
                mapoutputvalue.set("customer", cname + "," + phone);
            }

            // set order
            if (4 == length) {
                String price = strValue[2];
                String date = strValue[3];
                mapoutputkey.set(cid);
                mapoutputvalue.set("order", cname + "," + price + "," + date);
            }
            context.write(mapoutputkey, mapoutputvalue);
        }
    }

    // Reducer class
    private static class WordCountReduce extends
            Reducer<LongWritable, JoinWritable, NullWritable, Text> {

        private Text outputValue = new Text();

        @Override
        public void reduce(LongWritable key, Iterable<JoinWritable> values, Context context)
                throws IOException, InterruptedException {

            String customerInfo = null;
            List<String> orderList = new ArrayList<String>();
            for (JoinWritable value : values) {
                if ("customer".equals(value.getTag())) {
                    customerInfo = value.getData();
                    System.out.println(customerInfo);
                } else if ("order".equals(value.getTag())) {
                    orderList.add(value.getData());
                }
            }
            for (String order : orderList) {
                outputValue.set(key.get() + "," + customerInfo + "," + order);
                context.write(NullWritable.get(), outputValue);
            }
        }
    }

    // Driver
    public int run(String[] args) throws Exception {

        Configuration configuration = new Configuration();
        Job job = Job.getInstance(configuration, this.getClass().getSimpleName());
        job.setJarByClass(this.getClass());

        // input
        Path inPath = new Path(args[0]);
        FileInputFormat.addInputPath(job, inPath);

        // output
        Path outPath = new Path(args[1]);
        FileOutputFormat.setOutputPath(job, outPath);

        // mapper
        job.setMapperClass(WordCountMapper.class);
        job.setMapOutputKeyClass(LongWritable.class);
        job.setMapOutputValueClass(JoinWritable.class);

        // reducer
        job.setReducerClass(WordCountReduce.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Text.class);

        // submit job
        boolean isSuccess = job.waitForCompletion(true);

        return isSuccess ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {

        args = new String[]{
            "hdfs://hadoop09-linux-01.ibeifeng.com:8020/user/liuwl/tmp/join/input",
            "hdfs://hadoop09-linux-01.ibeifeng.com:8020/user/liuwl/tmp/join/output2"
        };
        // run job
        int status = new JoinMr().run(args);
        System.exit(status);
    }
}
JoinWritable.java ## custom data type
=================
package com.bigdata_senior.joinMr;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

public class JoinWritable implements Writable {

    private String tag;
    private String data;

    public JoinWritable() {}

    public JoinWritable(String tag, String data) {
        this.set(tag, data);
    }

    public void set(String tag, String data) {
        this.setTag(tag);
        this.setData(data);
    }

    public String getTag() {
        return tag;
    }

    public void setTag(String tag) {
        this.tag = tag;
    }

    public String getData() {
        return data;
    }

    public void setData(String data) {
        this.data = data;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(this.getTag());
        out.writeUTF(this.getData());
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        this.setTag(in.readUTF());
        this.setData(in.readUTF());
    }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + ((data == null) ? 0 : data.hashCode());
        result = prime * result + ((tag == null) ? 0 : tag.hashCode());
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;
        if (obj == null)
            return false;
        if (getClass() != obj.getClass())
            return false;
        JoinWritable other = (JoinWritable) obj;
        if (data == null) {
            if (other.data != null)
                return false;
        } else if (!data.equals(other.data))
            return false;
        if (tag == null) {
            if (other.tag != null)
                return false;
        } else if (!tag.equals(other.tag))
            return false;
        return true;
    }

    @Override
    public String toString() {
        return tag + "," + data;
    }
}
