Still haven't learned the Java 8 features? (2020)
Java 8 (1): Introduction to the New Features and Lambda Expressions
These are simply the notes I took while studying: pick a topic to dig into, analyze it from every angle, and keep notes so I can review and learn from them later. I'm happy to discuss and grow together, and not only about technology.
Preface:
Watching this with my eldest; I set the old computer up in the middle to play the videos.
--- Upcoming courses and their difficulty
- Java 8 in depth, difficulty 1
- Concurrency and Netty, difficulty 3
- JVM, difficulty 4
- Node, difficulty 2
- Spring essentials, difficulty 1
Topics mentioned in the course:
In front-end/back-end separated development, Node acts as the middle layer.
Netty has become standard at internet companies at home and abroad; the course will get into its underlying source code.
The JVM covers a lot of ground: we use it every day without understanding it deeply, the various locks, visibility, and so on, all closely tied to how computers work.
圣思园 mainly targets people who are already working, most of them front-line developers.
The course is complete and moves from basic to advanced; be patient with it.
If your foundations are weak, watch the earlier classroom recordings first, and look things up whenever something is unclear.
Design notes from the lectures: the course is priced at 4,000 RMB.
JDK 8
Introduction: Java 8 is arguably the release with the biggest changes in the history of the Java language. It promises to move Java programming toward a functional style, which helps us write code that is more concise, more expressive, and in many cases better able to exploit parallel hardware. This course digs into the Java 8 features so that students can master them and apply them flexibly in projects. You will learn how to write a Java function in one line with a lambda expression, how to use that ability with the new Stream API, and how to shrink verbose collection-processing code into simple, more readable stream programs. You will also learn the mechanics of creating and consuming streams, analyze their performance, and be able to judge when to invoke the API's parallel execution.
Course outline:
- Introduction to the new features in Java 8
- Introduction to lambda expressions
- Replacing anonymous inner classes with lambda expressions
- What lambda expressions are for
- External iteration vs. internal iteration
- Java lambda expression syntax in detail
- Functional interfaces in detail
- Passing values vs. passing behavior
- Stream in depth
- The Stream API in detail
- Sequential streams vs. parallel streams
- What a Stream is made of
- Ways to create a stream source
- Types of stream operations
- Stream conversions
- Optional in detail
- Default methods in detail
- Method references and constructor references
- The Predicate interface in detail
- The Function interface in detail
- The Consumer interface explained
- Introduction to filter
- Map-reduce; intermediate vs. terminal operations
- Analysis of the new Date API
Lambda expressions: functional programming, in contrast to the imperative style used before.
With an object-oriented language we operate on data: encapsulation, inheritance, polymorphism.
Functional programming is oriented around behavior. The benefit: more readable code.
Android development uses anonymous inner classes heavily.
Keywords mentioned:
Kotlin, JetBrains; construction (constructors)
When he was learning, he read through a lot of source code.
A brief introduction to each of the technologies to be covered.
Tools used while teaching the course:
Mac, JDK 8, IDEA (many of its features are implemented as plugins)
The Java 8 course begins
Lambda expressions
Why use lambda expressions
- In Java you cannot pass a function as an argument to a method, nor can a method return a function.
- In JavaScript, passing a function as an argument, or returning one as the result of another function, is very common; it is a classic functional language.
Java anonymous inner classes.
Introduction to anonymous inner classes
Using Gradle; it can use Maven's central repository directly.
Gradle has become the standard for Android development.
Lambda:
Anonymous inner class
my_jButton.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
System.out.println("Button Pressed! ");
}
});
After rewriting with a lambda:
my_jButton.addActionListener(e -> System.out.println("Button Pressed!"));
The basic structure of a lambda expression:
(param1, param2, param3) -> {
}
Functional interface: an interface that contains exactly one abstract method.
Such an interface can be instantiated with a lambda expression.
About functional interfaces:
- If an interface has only one abstract method, it is a functional interface.
- If we declare the @FunctionalInterface annotation on an interface, the compiler enforces the functional-interface rules on that interface.
- If an interface has only one abstract method but we did not add the @FunctionalInterface annotation, the compiler still treats it as a functional interface.
Understanding functional interfaces through an example:
package com.erwa.jdk8;
@FunctionalInterface
interface MyInterface {
void test();
// Multiple non-overriding abstract methods found in interface com.erwa.jdk8.MyInterface
// void te();
//If an interface declares an abstract method that overrides a public method of Object,
//that method does not add to the abstract-method count, so the interface is still functional.
// Object is the superclass of every class.
@Override
String toString();
}
public class Test2 {
public void myTest(MyInterface myInterface) {
System.out.println(1);
myInterface.test();
System.out.println(2);
}
public static void main(String[] args) {
Test2 test2 = new Test2();
test2.myTest(() -> {
System.out.println(3);
});
}
}
Starting with 1.8, interfaces can also contain method implementations: default methods.
default void forEach(Consumer<? super T> action) {
Objects.requireNonNull(action);
for (T t : this) {
action.accept(t);
}
}
* <p>Note that instances of functional interfaces can be created with
* lambda expressions, method references, or constructor reference
What lambda expressions do for us:
- Lambda expressions add the missing functional-programming ingredient to Java, letting us treat functions as first-class citizens.
- In languages where functions are first-class citizens, the type of a lambda expression is a function. In Java, lambda expressions are objects; they must be attached to a particular kind of object type, a functional interface.
Ways to iterate:
- External iteration
- Internal iteration
- Method reference:
list.forEach(System.out::println);
Interfaces can contain default methods and static methods; a minimal sketch follows.
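A minimal sketch of an interface that carries a default method and a static method (the Greeter interface and its members are hypothetical, not from the course):
public class InterfaceMembersDemo {
    interface Greeter {
        String greet(String name);                                 // the single abstract method

        default String greetLoudly(String name) {                  // default method: carries an implementation
            return greet(name).toUpperCase();
        }

        static Greeter prefixed(String prefix) {                    // static method: called on the interface itself
            return name -> prefix + " " + name;
        }
    }

    public static void main(String[] args) {
        Greeter greeter = Greeter.prefixed("hi");
        System.out.println(greeter.greet("java"));        // hi java
        System.out.println(greeter.greetLoudly("java"));  // HI JAVA
    }
}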
Streams: Stream
/**
* Returns a sequential {@code Stream} with this collection as its source.
*
* <p>This method should be overridden when the {@link #spliterator()}
* method cannot return a spliterator that is {@code IMMUTABLE},
* {@code CONCURRENT}, or <em>late-binding</em>. (See {@link #spliterator()}
* for details.)
*
* @implSpec
* The default implementation creates a sequential {@code Stream} from the
* collection's {@code Spliterator}.
*
* @return a sequential {@code Stream} over the elements in this collection
* @since 1.8
*/
default Stream<E> stream() {
return StreamSupport.stream(spliterator(), false);
}
An example done in the stream style:
public static void main(String[] args) {
//Ways a functional interface can be implemented
MyInterface1 i1 = () -> {};
System.out.println(i1.getClass().getInterfaces()[0]);
MyInterface2 i2 = () -> {};
System.out.println(i2.getClass().getInterfaces()[0]);
// With no target type from the context, this cannot compile.
// () -> {};
//Create a thread with a lambda.
new Thread(() -> System.out.println("hello world")).start();
//Given a list, print its contents in upper case.
List<String> list = Arrays.asList("hello","world","hello world");
//Use a lambda to print every element upper-cased.
// list.forEach(item -> System.out.println(item.toUpperCase()));
//Put the upper-cased words into a new collection.
List<String> list1 = new ArrayList<>(); //diamond syntax: the second <> needs no type argument
// list.forEach(item -> list1.add(item.toUpperCase()));
// list1.forEach(System.out::println);
//A further improvement: the stream style
// list.stream();         //sequential
// list.parallelStream(); //parallel
list.stream().map(item -> item.toUpperCase()).forEach(System.out::println);//sequential
list.stream().map(String::toUpperCase).forEach(System.out::println);
//Both forms above satisfy the functional-interface contract.
}
What lambda expressions are for
- Passing behavior, not just values
- Raising the level of abstraction
- Better API reusability
- More flexibility
Basic lambda syntax
- (argument) -> (body)
- e.g.: (arg1, arg2, ...) -> (body)
Structure of a Java lambda
- A lambda expression can have zero or more parameters.
- Parameter types can be declared explicitly or inferred from the context; for example, (int a) and (a) are equivalent.
- All parameters are enclosed in parentheses and separated by commas.
- Empty parentheses mean the parameter list is empty.
- When there is exactly one parameter and its type can be inferred, the parentheses may be omitted.
- The body of a lambda may contain zero or more statements.
- If the body is a single statement, the braces {} may be omitted, and the return type of the anonymous function is the type of that expression.
- If the body contains more than one statement, it must be wrapped in braces; the return type of the anonymous function matches the type returned by the block, or void if nothing is returned. (A few examples follow this list.)
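A few quick illustrations of the syntax rules above, using standard java.util.function and java.util.Comparator types (the variable names are just for illustration):
import java.util.Comparator;
import java.util.function.BiFunction;
import java.util.function.Function;

public class LambdaSyntaxDemo {
    public static void main(String[] args) {
        Runnable noArgs = () -> System.out.println("empty parameter list");   // empty parentheses
        Function<String, Integer> length = s -> s.length();                   // one inferred parameter, parentheses omitted
        BiFunction<Integer, Integer, Integer> add = (a, b) -> a + b;          // single-expression body, braces omitted
        Comparator<String> byLengthThenText = (String a, String b) -> {       // explicit types, multi-statement body
            int byLength = Integer.compare(a.length(), b.length());
            return byLength != 0 ? byLength : a.compareTo(b);
        };

        noArgs.run();
        System.out.println(length.apply("lambda"));               // 6
        System.out.println(add.apply(2, 3));                      // 5
        System.out.println(byLengthThenText.compare("ab", "b"));  // positive: "ab" is longer
    }
}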
Higher-order functions:
A function that takes a function as a parameter, or returns a function as its result, is called a higher-order function.
An example of passing behavior:
public static void main(String[] args) {
// Testing Function
// One way to pass behavior.
FunctionTest functionTest = new FunctionTest();
int compute = functionTest.compute(1, value -> 2 * value);
System.out.println(compute);
System.out.println(functionTest.compute(2,value -> 5+ value));
System.out.println(functionTest.compute(3,a -> a * a));
System.out.println(functionTest.convert(5, a -> a + "hello "));
/**
* Higher-order function:
* a function that takes a function as a parameter, or returns a function as its result, is a higher-order function.
*/
}
//With lambda expressions the behavior does not need to be defined ahead of time; it is passed in at the point of use.
// That is functional programming.
public int compute(int a, Function<Integer, Integer> function) {
return function.apply(a);
}
public String convert(int a, Function<Integer, String> function) {
return function.apply(a);
}
// The old way of supplying behavior: define it ahead of time and call that method when needed, e.g.:
public int method1(int a ){
return a * 2 ;
}
Notes on the default methods provided by the Function interface:
/**
* Returns a composed function that first applies the {@code before}
* function to its input, and then applies this function to the result.
* If evaluation of either function throws an exception, it is relayed to
* the caller of the composed function.
Returns a composed function: the before function is applied to the input first, then this function runs apply on that result.
*
* @param <V> the type of input to the {@code before} function, and to the
* composed function
* @param before the function to apply before this function is applied
* @return a composed function that first applies the {@code before}
* function and then applies this function
* @throws NullPointerException if before is null
*
* @see #andThen(Function)
*/
default <V> Function<V, R> compose(Function<? super V, ? extends T> before) {
Objects.requireNonNull(before);
return (V v) -> apply(before.apply(v));
}
/**
* Returns a composed function that first applies this function to
* its input, and then applies the {@code after} function to the result.
* If evaluation of either function throws an exception, it is relayed to
* the caller of the composed function.
*
* @param <V> the type of output of the {@code after} function, and of the
* composed function
* @param after the function to apply after this function is applied
* @return a composed function that first applies this function and then
* applies the {@code after} function
* @throws NullPointerException if after is null
*
* @see #compose(Function)
*/
default <V> Function<T, V> andThen(Function<? super R, ? extends V> after) {
Objects.requireNonNull(after);
return (T t) -> after.apply(apply(t));
}
compose: chains two Functions together; the function passed in as the argument runs first.
andThen: applies the current function's apply first, then feeds the result into the argument function's apply; the argument runs last.
identity: returns exactly what it is given (a small sketch follows).
BiFunction: a Function that takes two arguments.
Why does BiFunction provide andThen but not compose?
Because if compose were provided, the preceding function could only hand over a single return value, not the two arguments a BiFunction needs, so it would not make sense.
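A tiny sketch of identity(), which the notes above mention but the example below does not exercise:
import java.util.function.Function;

public class IdentityDemo {
    public static void main(String[] args) {
        Function<String, String> identity = Function.identity(); // behaves like t -> t
        System.out.println(identity.apply("hello"));              // prints hello
    }
}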
public static void main(String[] args) {
FunctionTest2 functionTest2 = new FunctionTest2();
// compose
// System.out.println(functionTest2.compute(2,a -> a * 3,b -> b * b));
// andThen
// System.out.println(functionTest2.compute2(2,a -> a * 3,b -> b * b));
//BiFunction
// System.out.println(functionTest2.compute3(1,2, (a,b) -> a - b));
// System.out.println(functionTest2.compute3(1,2, (a,b) -> a * b));
// System.out.println(functionTest2.compute3(1,2, (a,b) -> a + b));
// System.out.println(functionTest2.compute3(1,2, (a,b) -> a / b));
//BiFunction andThen
System.out.println(functionTest2.compute4(2,3,(a,b) ->a + b , a -> a * a ));
}
//compose : chains two Functions; the argument function runs first
//andThen : applies the current function first, then applies the argument function to the result; the argument runs last
public int compute(int a, Function<Integer, Integer> function1, Function<Integer, Integer> function2) {
return function1.compose(function2).apply(a);
}
public int compute2(int a, Function<Integer, Integer> function1, Function<Integer, Integer> function2) {
return function1.andThen(function2).apply(a);
}
//BiFunction
//Sum two arguments
//First define the behavior abstractly.
public int compute3(int a, int b, BiFunction<Integer, Integer, Integer> biFunction) {
return biFunction.apply(a, b);
}
//BiFunction andThen
public int compute4(int a, int b, BiFunction<Integer, Integer, Integer> biFunction, Function<Integer, Integer> function) {
return biFunction.andThen(function).apply(a, b);
}
Testing functional interfaces with an example:
public class PersonTest {
public static void main(String[] args) {
List<Person> personList = new ArrayList<>();
personList.add(new Person("zhangsan", 20));
personList.add(new Person("zhangsan", 28));
personList.add(new Person("lisi", 30));
personList.add(new Person("wangwu", 40));
PersonTest test = new PersonTest();
//Test getPersonUsername
// List<Person> personList1 = test.getPersonUsername("zhangsan", personList);
// personList1.forEach(person -> System.out.println(person.getUsername()));
//Test getPersonByAge
List<Person> personByAge = test.getPersonByAge(25, personList);
personByAge.forEach(person -> System.out.println(person.getAge()));
//Third test: the caller defines the behavior and passes it in
List<Person> list = test.getPersonByAge2(20,personList,(age,persons) ->{
return persons.stream().filter(person -> person.getAge() > age).collect(Collectors.toList());
});
list.forEach(person -> System.out.println(person.getAge()));
}
public List<Person> getPersonUsername(String username, List<Person> personList) {
return personList.stream().filter(person -> person.getUsername().equals(username)).collect(Collectors.toList());
}
public List<Person> getPersonByAge(int age, List<Person> personList) {
//Using a BiFunction
// BiFunction<Integer, List<Person>, List<Person>> biFunction = (ageOfPerson, list) -> {
// return list.stream().filter(person -> person.getAge() > ageOfPerson ).collect(Collectors.toList());
// };
//After simplifying:
BiFunction<Integer, List<Person>, List<Person>> biFunction = (ageOfPerson, list) ->
list.stream().filter(person -> person.getAge() > ageOfPerson ).collect(Collectors.toList());
return biFunction.apply(age, personList);
}
//Third approach: the action is defined by the caller and passed in
public List<Person> getPersonByAge2(int age ,List<Person> list,BiFunction<Integer,List<Person>,List<Person>> biFunction){
return biFunction.apply(age, list);
}
}
The essence of functional interfaces: what gets passed is behavior, not data.
public static void main(String[] args) {
//Given an input, check whether it satisfies the condition; return true if it does
Predicate<String> predicate = p -> p.length() > 5;
System.out.println(predicate.test("nnihaoda"));
}
So far we have only covered the handful of most important, most frequently used interfaces in the java.util.function package.
January 1, 2020, 19:03:33. A new year begins; I'm recording the time of each study session.
Predicate. Methods it declares (a short sketch of the default and static ones follows the list):
boolean test(T t);
default Predicate<T> or(Predicate<? super T> other)
default Predicate<T> negate()
default Predicate<T> and(Predicate<? super T> other)
static <T> Predicate<T> isEqual(Object targetRef)
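A short sketch of the default and static methods listed above (or, negate, isEqual); the sample predicates are made up for illustration:
import java.util.function.Predicate;

public class PredicateDefaultsDemo {
    public static void main(String[] args) {
        Predicate<Integer> positive = n -> n > 0;
        Predicate<Integer> even = n -> n % 2 == 0;

        System.out.println(positive.or(even).test(-4));              // true: -4 is even
        System.out.println(positive.negate().test(3));               // false: 3 is positive
        System.out.println(Predicate.isEqual("java").test("java"));  // true: equals-based comparison
    }
}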
Functional programming emphasizes passing behavior rather than passing values.
public class PredicateTest2 {
/**
* Test the test method of Predicate
*/
public static void main(String[] args) {
List<Integer> list = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9);
PredicateTest2 predicateTest2 = new PredicateTest2();
//Numbers greater than 5
predicateTest2.getAllFunction(list,item -> item > 5);
System.out.println("--------");
//All the even numbers
predicateTest2.getAllFunction(list,item -> item % 2 ==0);
System.out.println("--------");
//All the numbers
predicateTest2.getAllFunction(list,item -> true);
//Numbers greater than 5 that are also even
System.out.println("--------");
predicateTest2.testAnd(list,item -> item > 5,item -> item % 2 == 0);
}
public void getAllFunction(List<Integer> list, Predicate<Integer> predicate){
for (Integer integer : list) {
if (predicate.test(integer)) {
System.out.println(integer);
}
}
}
// test or and
public void testAnd(List<Integer> list,Predicate<Integer> integerPredicate,Predicate<Integer> integerPredicate1){
for (Integer integer : list) {
if (integerPredicate.and(integerPredicate1).test(integer)) {
System.out.println(integer);
}
}
}
}
What do lambda expressions really give us? In the old object-oriented style a method could only carry out one fixed piece of behavior; now that behavior itself is passed in, one method can be reused with many behaviors.
Understanding the three logical combinators: and, or, negate.
The Supplier interface: a supplier/provider (takes no arguments, returns a result).
Where is it used? Factories; a small sketch follows.
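A minimal sketch of Supplier used as a factory; the Order type and the create method are hypothetical:
import java.util.function.Supplier;

public class SupplierDemo {
    static class Order { }   // hypothetical product type

    // the factory receives "how to build one" rather than a ready-made object
    static Order create(Supplier<Order> factory) {
        return factory.get();
    }

    public static void main(String[] args) {
        Order order = create(Order::new);   // constructor reference used as the Supplier
        System.out.println("created: " + order);
    }
}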
January 3, 2020, 08:06:28
The BinaryOperator interface
public class SinaryOpertorTest {
public static void main(String[] args) {
SinaryOpertorTest sinaryOpertorTest = new SinaryOpertorTest();
System.out.println(sinaryOpertorTest.compute(1,2,(a,b) -> a+b));
System.out.println("-- -- - - - -- -");
System.out.println(sinaryOpertorTest.getMax("hello123","world",(a,b) -> a.length() - b.length()));
}
private int compute(int a, int b, BinaryOperator<Integer> binaryOperator) {
return binaryOperator.apply(a, b);
}
private String getMax(String a, String b, Comparator<String> comparator) {
return BinaryOperator.maxBy(comparator).apply(a, b);
}
}
Optional (a final class): do not try to use Optional as a method parameter; it is generally used only to wrap return values, to guard against NullPointerException.
- empty()
- of()
- ofNullable()
- isPresent()
- get()
- ...
public class OptionalTest {
public static void main(String[] args) {
Optional<String> optional = Optional.of("hello");
//Call this one when the value might be null
// Optional<String> optional2 = Optional.ofNullable("hello");
// Optional<String> optional1 = Optional.empty();
//The outdated style
// if (optional.isPresent()) {
// System.out.println(optional.get());
// }
optional.ifPresent(item -> System.out.println(item));
System.out.println(optional.orElse("nihao"));
System.out.println(optional.orElseGet(() -> "nihao"));
}
}
public class OptionalTest2 {
public static void main(String[] args) {
Employee employee = new Employee();
employee.setName("dawa");
Employee employee1 = new Employee();
employee1.setName("erwa");
List<Employee> list = Arrays.asList(employee, employee1);
Company company = new Company("gongsi", list);
Optional<Company> optionalCompany = Optional.ofNullable(company);
System.out.println(optionalCompany.map(company1 -> company1.getList()).orElse(Collections.emptyList()));
}
}
Java 8 (2): Method References in Detail and an Introduction to Streams
Still haven't learned Java 8 in 2020? Method references in detail, plus an introduction to streams and how to operate on them (part 3)
Method references in detail
Method reference
A method reference is essentially syntactic sugar over a lambda expression.
You can think of a method reference as a "function pointer".
Method references fall into four categories:
- ClassName::staticMethod
- instanceReference::instanceMethod
- ClassName::instanceMethod (the least intuitive: the referenced method takes only one parameter, so why does a two-argument call still work? Because when comparing, the first object becomes the receiver of getStudentByScore1 and the second object is passed as its argument.)
- Constructor reference: ClassName::new
public class StudentTest {
public static void main(String[] args) {
Student student = new Student("zhangsan",10);
Student student1 = new Student("lisi",40);
Student student2 = new Student("wangwu",30);
Student student3 = new Student("zhaoliu",550);
List<Student> list = Arrays.asList(student, student2, student3, student1);
// list.forEach(item -> System.out.println(item.getName()));
//1. ClassName :: staticMethod
// list.sort((studentpar1,studentpar2) -> Student.getStudentByScore(studentpar1,studentpar2));
list.sort(Student::getStudentByScore);
list.forEach(item -> System.out.println(item.getScore()));
System.out.println(" - - - - - - - -- ");
// 2. instanceReference::instanceMethod
StudentMethod studentMethod = new StudentMethod();
list.sort(studentMethod::getStudentBySource);
list.forEach(item -> System.out.println(item.getScore()));
System.out.println(" - - - -- -- ");
// 3. ClassName :: instanceMethod
// Why does this work when the referenced method takes only one parameter? Because during the comparison the first object becomes the receiver of getStudentByScore1 and the second object is passed as its argument
list.sort(Student::getStudentByScore1);
list.forEach(item -> System.out.println(item.getScore()));
System.out.println("- - - - - - - -");
// An example using the built-in sort
List<String> list1 = Arrays.asList("da", "era", "a");
// Collections.sort(list1,(city1,city2) -> city1.compareToIgnoreCase(city2));
list1.sort(String::compareToIgnoreCase);
list1.forEach(System.out::println);
System.out.println("- - - - - - -- ");
//4. Constructor reference
StudentTest studentTest = new StudentTest();
System.out.println(studentTest.getString(String::new));
}
public String getString(Supplier<String> supplier) {
return supplier.get()+"hello";
}
}
Default methods
default method
A default method is a method whose implementation is already supplied by the interface itself.
The most important reason default methods were introduced is that Java has to stay backward compatible.
Scenario 1: a class implements two interfaces that both declare a default method with the same name. That is a compile error; the class has to override the conflicting method.
Scenario 2: by convention, an implementation class has higher priority than an interface. If a class extends a superclass and implements an interface, and both define a method with the same name, the class uses the superclass's method. (A sketch of both scenarios follows.)
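A compact sketch of both scenarios (the interface and class names are made up):
public class DefaultMethodDemo {
    interface A { default String hello() { return "A"; } }
    interface B { default String hello() { return "B"; } }

    // Scenario 1: the same default method comes from two interfaces, so the class must override it
    static class AB implements A, B {
        @Override
        public String hello() {
            return A.super.hello() + B.super.hello(); // pick or combine explicitly
        }
    }

    static class Base { public String hello() { return "Base"; } }
    interface C { default String hello() { return "C"; } }

    // Scenario 2: the superclass wins over the interface; BaseC inherits Base.hello()
    static class BaseC extends Base implements C { }

    public static void main(String[] args) {
        System.out.println(new AB().hello());    // AB
        System.out.println(new BaseC().hello()); // Base
    }
}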
Streams: an introduction and a detailed look at how to operate on them
Collection now provides a stream() method.
A stream does not store values; it obtains them through a pipeline.
Streams are functional in nature: an operation on a stream produces a result but does not modify the underlying data source; a collection can serve as that source.
Lazy evaluation: many stream operations (filtering, mapping, sorting, and so on) can be implemented lazily.
Streams make it much nicer to work with collections; they pair naturally with lambda expressions for a smoother functional style.
A stream is made of three parts:
- a source
- zero or more intermediate operations (what do they operate on? the source)
- a terminal operation (which produces a result)
Stream operations fall into two categories:
- lazily evaluated (intermediate operations)
- eagerly evaluated (terminal operations)
With a chained call such as stream.xxx().yyy().zzz().count(), the three earlier methods are not invoked until count() is; examples come later.
Master the commonly used stream APIs and understand what happens underneath.
Streams support parallelization across multiple threads; iterators do not.
How do you use a stream?
Ways to create a stream (a sketch follows this list):
- from a static factory method: Stream stream = Stream.of(...);
- from an array: Arrays.stream(array);
- from a collection: Stream stream = list.stream();
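A hedged sketch of the three creation styles just listed:
import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;

public class StreamCreationDemo {
    public static void main(String[] args) {
        Stream<String> fromValues = Stream.of("a", "b", "c");                  // static factory method
        Stream<String> fromArray  = Arrays.stream(new String[]{"x", "y"});     // from an array
        List<String> list = Arrays.asList("1", "2", "3");
        Stream<String> fromList   = list.stream();                             // from a collection

        fromValues.forEach(System.out::println);
        fromArray.forEach(System.out::println);
        fromList.forEach(System.out::println);
    }
}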
Simple stream examples
public static void main(String[] args) {
IntStream.of(1,2,4,5,6).forEach(System.out::println);
IntStream.range(3, 8).forEach(System.out::println);
IntStream.rangeClosed(3, 8).forEach(System.out::println);
}
Example: multiply every number in a list by two, then sum them.
public static void main(String[] args) {
List<Integer> list = Arrays.asList(1, 2, 3, 4, 5);
System.out.println(list.stream().map(i -> i*2).reduce(0,Integer::sum));
}
How does functional programming fundamentally differ from traditional object-oriented programming?
Traditional object-oriented code passes data around; functional code passes behavior through its methods, and that behavior directs how the data is processed.
Example: practice converting a stream into a List
public static void main(String[] args) {
Stream<String> stream = Stream.of("hello", "world", "hello world");
// String[] stringArray = stream.toArray(length -> new String[length]);
//Rewritten as a method reference: a constructor reference.
String[] stringArray = stream.toArray(String[]::new);
Arrays.asList(stringArray).forEach(System.out::println);
System.out.println("- - - - - - - - - - -");
//To convert a stream into a List there is a ready-made, pre-packaged method
Stream<String> stream1 = Stream.of("hello", "world", "hello world");
List<String> collect = stream1.collect(Collectors.toList());// collect is itself a terminal operation
collect.forEach(System.out::println);
System.out.println("- - - - - - ");
//Use the raw collect overload to turn the stream into a List
Stream<String> stream2 = Stream.of("hello", "world", "hello world");
// List<String> lis = stream2.collect(() -> new ArrayList(), (theList, item) -> theList.add(item),
// (theList1, theList2) -> theList1.addAll(theList2));
// The same thing written with method references; this form is harder to understand.
List<String> list = stream2.collect(LinkedList::new, LinkedList::add, LinkedList::addAll);
//The same approach can also return an ArrayList.
// List<String> list1 = stream2.collect(ArrayList::new, ArrayList::add, ArrayList::addAll);
list.forEach(System.out::println);
}
The Collectors class contains many helpers for converting streams.
Example: converting a stream into data of various types.
public static void main(String[] args) {
Stream<String> stream = Stream.of("hello", "world", "hello world");
//Another way to convert the stream into a List
// List<String> list= stream.collect(Collectors.toCollection(ArrayList::new));
// list.forEach(System.out::println);
//Convert the stream to a Set
// Set<String> set = stream.collect(Collectors.toSet());
//To a TreeSet
// TreeSet<String> set = stream.collect(Collectors.toCollection(TreeSet::new));
// set.forEach(System.out::println);
//To a String
String string = stream.collect(Collectors.joining());
System.out.println(string);
//The Collectors class has many more helper methods.
}
When facing a problem, first consider whether a method reference and the stream style can express it, because they are simpler to use.
Example: convert every element in a collection to upper case and print it.
public static void main(String[] args) {
//Convert every element of the collection to upper case and print it
List<String> list = Arrays.asList("hello","world","hello world");
//Join into one string, then upper-case it.
// System.out.println(list.stream().collect(Collectors.joining()).toUpperCase());
//The line above can also be written like this:
// System.out.println(String.join("", list).toUpperCase());
//The version from the video keeps a List of upper-cased strings
list.stream().map(String::toUpperCase).collect(Collectors.toList()).forEach(System.out::println);
//Square every number in the collection and print the results.
List<Integer> list1 = Arrays.asList(1, 2, 3, 4, 5);
list1.stream().map(item -> item * item).collect(Collectors.toList()).forEach(System.out::println);
}
The stream map() operation applies a transformation to each element of the collection.
The stream flat operation: flattening with flatMap.
public static void main(String[] args) {
// Example: the flattening operation. A collection holds three lists; after flattening, the elements of the three lists are laid out one after another.
Stream<List<Integer>> stream = Stream.of(Arrays.asList(1), Arrays.asList(2, 3), Arrays.asList(4, 5));
//Square the numbers inside each list, flatten, and output a single list
stream.flatMap(theList -> theList.stream()).map(item -> item * item).forEach(System.out::println);
}
Other Stream methods:
public static void main(String[] args) {
// Other stream methods.
// generate(): produces a Stream
Stream<String> stream = Stream.generate(UUID.randomUUID()::toString);
// System.out.println(stream.findFirst().get());
// findFirst returns the first element and then short-circuits; it returns an Optional (to avoid an NPE). Reaching straight for get() is not in the functional spirit
// stream.findFirst().ifPresent(System.out::print);
// iterate() produces an infinite sequential stream.
// It is rarely used on its own; limit is used to cap the overall length.
Stream.iterate(1, item -> item + 2).limit(6).forEach(System.out::println);
}
Stream operation practice (Stream provides all kinds of operators):
Example: keep the elements of the stream that are greater than 2, multiply each by 2, skip the first two elements of what remains, take the next two, and finally sum them.
Stream<Integer> stream = Stream.iterate(1, item -> item + 2).limit(6);
//Elements greater than 2: filter() first.
//Multiply each by 2: mapToInt, to avoid repeated unboxing.
//Skip the first two elements: skip(2)
//Take the next two elements: limit(2)
//Sum them: sum()
System.out.println(stream.filter(item -> item>2).mapToInt(item -> item * 2).skip(2).limit(2).sum());
Example: the same pipeline, but find the smallest element instead of the sum.
// .min() returns an OptionalInt.
// System.out.println(stream.filter(item -> item>2).mapToInt(item -> item * 2).skip(2).limit(2).min());
//Call it as below instead; unwrapping the version above could fail when the Optional is empty
stream.filter(item -> item>2).mapToInt(item -> item * 2).skip(2).limit(2).min().ifPresent(System.out::println);
Example: maximum, minimum, sum and more in a single call: summaryStatistics().
While practicing I ran into a problem: printing the results of two separate operations on the same stream throws an exception saying the stream has already been operated upon or closed.
Note: reusing a stream, or using a stream that has already been closed, throws an exception.
How to avoid it: process the stream in one method chain. The exact reason is covered later in the source-code walkthrough; a minimal reproduction follows.
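A minimal reproduction of the reuse problem (my own snippet, not from the course):
import java.util.stream.Stream;

public class StreamReuseDemo {
    public static void main(String[] args) {
        Stream<String> stream = Stream.of("hello", "world");
        System.out.println(stream.count());   // first terminal operation: fine
        // A second use of the same stream throws:
        // java.lang.IllegalStateException: stream has already been operated upon or closed
        System.out.println(stream.count());
    }
}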
Example: the essential difference between intermediate operations (lazy) and terminal operations (eager)
public static void main(String[] args) {
List<String> list = Arrays.asList("hello", "world", "hello world");
//Capitalize the first letter
list.stream().map(item ->{
String s = item.substring(0, 1).toUpperCase() + item.substring(1);
System.out.println("test");
return s;
}).forEach(System.out::println);
//Until a terminal operation is reached, the intermediate operations are not executed; they are deferred
// Only when the terminal operation .forEach() is reached does the intermediate code run
}
Example: how the order of stream operations changes behavior
//This program never terminates
IntStream.iterate(0,i->(i+1)%2).distinct().limit(6).forEach(System.out::println);
//This program terminates
IntStream.iterate(0,i->(i+1)%2).limit(6).distinct().forEach(System.out::println);
Going deeper into Stream internals
Unlike an iterator, a Stream can be processed in parallel; an iterator can only be driven imperatively and sequentially.
With sequential traversal, each item is read only after the previous one has been read.
With parallel traversal, the data is split into several segments, each processed on a different thread, and the results are combined at the end.
Stream's parallel operations rely on the Fork/Join framework introduced in Java 7; a tiny demonstration follows.
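A tiny demonstration that a parallel stream fans work out over the common ForkJoinPool (the exact thread names depend on your JVM):
import java.util.stream.IntStream;

public class ParallelStreamThreadsDemo {
    public static void main(String[] args) {
        IntStream.rangeClosed(1, 10)
                 .parallel()
                 .forEach(i -> System.out.println(
                         Thread.currentThread().getName() + " handled " + i));
        // Typically prints a mix of "main" and "ForkJoinPool.commonPool-worker-N" thread names.
    }
}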
A stream (Stream) consists of three parts:
- a source (Source)
- zero or more intermediate operations (transforming values); what do they operate on? the source
- a terminal operation (which produces a result)
Internal iteration vs. external iteration
A declarative style: comparing SQL with Stream
select name from student where age > 20 and address = 'beijing' order by desc;
===================================================================================
Student.stream().filter(student -> student.getAge >20 ).filter(student -> student.getAddress().equals("beijing")).sorted(..).forEach(student -> System.out.println(student.getName));
The description above never tells the underlying machinery exactly how to do the work; it only states what is wanted. This stream style is called internal iteration. As for performance, writing it as a stream does not in itself make things slower.
An example of external iteration, the pre-Java 8 way:
List<Student> list = new ArrayList<>();
for (int i = 0; i < students.size(); i++) {
    Student student = students.get(i);
    if (student.getAge() > 20) {
        list.add(student);
    }
}
Collections.sort(list.....)
list.forEach().....
Streams and collections are inseparable.
Collections are about the data and how it is stored; streams are about computations over that data.
Like an iterator, a stream cannot be reused or consumed twice.
How to tell intermediate operations from terminal operations:
An intermediate operation returns a Stream object, for example Stream<T>, IntStream, and so on;
a terminal operation does not return a Stream: it may return nothing, or a single value of some other type.
Basic use of parallel streams
Example: a simple timing comparison of a sequential stream and a parallel stream
public static void main(String[] args) {
// Comparing a sequential stream with a parallel stream
List<String> list = new ArrayList<>(5000000);
for (int i = 0; i < 5000000; i++) {
list.add(UUID.randomUUID().toString());
}
System.out.println("sorting started");
long startTime = System.nanoTime();
// list.stream().sorted().count();      //sequential stream
list.parallelStream().sorted().count(); //parallel stream
long endTime = System.nanoTime();
long millis = TimeUnit.NANOSECONDS.toMillis(endTime - startTime);
System.out.println("sorting took (ms): "+ millis);
}
As the (omitted) screenshot showed, the parallel and sequential runs differed by roughly a factor of four.
Example: print the length of the first word in the list whose length is 5, while also printing each word as it is examined.
public static void main(String[] args) {
List<String> list = Arrays.asList("hello", "world", "hello world");
// list.stream().mapToInt(item -> item.length()).filter(length -> length ==5)
// .findFirst().ifPresent(System.out::println);
list.stream().mapToInt(item -> {
int length = item.length();
System.out.println(item);
return length;
}).filter(length -> length == 5).findFirst().ifPresent(System.out::println);
//The output is hello; world is never processed.
}
The output is hello; world is never processed.
How stream operations work: think of the stream as a container that records, per element, the operations to apply. When it runs, those operations are applied to one element at a time, in sequence, and short-circuit operations can stop the processing early.
Example: find all the distinct words in this collection: a use of flatMap().
public static void main(String[] args) {
//Example: find all the words in the collection, de-duplicated.
List<String> list = Arrays.asList("hello welcome", "world hello", "hello world", "hello hello world");
// list.stream().map(item -> item.split(" ")).distinct()
// .collect(Collectors.toList()).forEach(System.out::println);
//map alone cannot do this; use flatMap
list.stream().map(item -> item.split(" ")).flatMap(Arrays::stream)
.distinct().collect(Collectors.toList()).forEach(System.out::println);
//Result: hello welcome world
}
Example: combine two lists and print hi zhangsan, hi lisi, hi wangwu, hello zhangsan, hello lisi, ...: another use of flatMap().
public static void main(String[] args) {
//Combine the two lists, printing hi zhangsan, hi lisi, hi wangwu, hello zhangsan, hello lisi ...
List<String> list = Arrays.asList("Hi", "Hello", "你好");
List<String> list1 = Arrays.asList("zhangsan", "lisi", "wangwu");
List<String> collect = list.stream().flatMap(item -> list1.stream().map(item2 -> item + " " +
item2)).collect(Collectors.toList());
collect.forEach(System.out::println);
}
Example: stream support for grouping and partitioning: group by / partition by
public static void main(String[] args) {
//Prepare the data.
Student student1 = new Student("zhangsan", 100, 20);
Student student2 = new Student("lisi", 90, 20);
Student student3 = new Student("wangwu", 90, 30);
Student student4 = new Student("zhangsan", 80, 40);
List<Student> students = Arrays.asList(student1, student2, student3, student4);
//Group the students by name.
Map<String, List<Student>> listMap = students.stream().collect(Collectors.groupingBy(Student::getName));
System.out.println(listMap);
//Group the students by score.
Map<Integer, List<Student>> collect = students.stream().collect(Collectors.groupingBy(Student::getScore));
System.out.println(collect);
//Group by age.
Map<Integer, List<Student>> ageMap = students.stream().collect(Collectors.groupingBy(Student::getAge));
System.out.println(ageMap);
//Group by name, then count the elements in each group.
Map<String, Long> nameCount = students.stream().collect(Collectors.groupingBy(Student::getName, Collectors.counting()));
System.out.println(nameCount);
//Group by name and compute each group's average score.
Map<String, Double> doubleMap = students.stream().collect(Collectors.groupingBy(Student::getName, Collectors.averagingDouble(Student::getScore)));
System.out.println(doubleMap);
//Partitioning: a special case of grouping with exactly two groups, true and false: partitioningBy
Map<Boolean, List<Student>> collect1 = students.stream().collect(Collectors.partitioningBy(student -> student.getScore() >= 90));
System.out.println(collect1);
}
Java 8 (3): Source-Level Analysis of the Collector Interface
Continuing with the Java 8 features.
Collector source analysis. Still haven't learned Java 8 in 2020?
How does JDK 8 support all of this underneath? Without understanding the internals, everyday use is fine, but when a problem shows up you get stuck with no solution in sight. When learning a new technology, first learn how to use it rather than obsessing over the source code; but once you use it more and more, studying the internals is a good way to learn.
Several approaches can implement the same feature. Which is better? The more specific method, because it avoids auto-boxing and unboxing; see the sketch below.
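A hedged illustration of why the more specific, primitive-specialized method is preferred (the numbers are arbitrary):
import java.util.Arrays;
import java.util.List;

public class BoxingDemo {
    public static void main(String[] args) {
        List<Integer> list = Arrays.asList(1, 2, 3, 4, 5);

        // Generic reduce: every addition boxes the intermediate Integer result.
        int boxedSum = list.stream().reduce(0, Integer::sum);

        // Specialized IntStream: unbox once, then stay on primitive ints.
        int primitiveSum = list.stream().mapToInt(Integer::intValue).sum();

        System.out.println(boxedSum + " " + primitiveSum); // 15 15
    }
}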
- collect: the collect operation
- Collector is the parameter type of the collect method.
- Collector is an interface: a mutable reduction operation that accumulates input elements into a mutable result container and optionally transforms the accumulated result into a final representation once all elements have been processed. It supports both sequential and parallel execution (parallel is not necessarily faster than sequential).
- Collectors provides ready-made implementations of common Collector reductions; Collectors is effectively a factory.
- To guarantee that sequential and parallel execution produce equivalent results, extra care is needed; two constraints must be satisfied:
identity
associativity
Identity: for any partially accumulated result a, it must hold that a == combiner.apply(a, supplier.get()). For example:
(List list1, List list2) -> { list1.addAll(list2); return list1; }
Associativity: see the example further down.
The Collector source code, annotated
/**
* A <a href="package-summary.html#Reduction">mutable reduction operation</a> that
* accumulates input elements into a mutable result container, optionally transforming
* the accumulated result into a final representation after all input elements
* have been processed. Reduction operations can be performed either sequentially
* or in parallel.
Collector is an interface: a mutable reduction operation that accumulates input elements into a mutable result container and optionally transforms the accumulated result into a final representation once all elements have been processed. It supports both sequential and parallel execution (parallel is not necessarily faster).
* <p>Examples of mutable reduction operations include:
* accumulating elements into a {@code Collection}; concatenating
* strings using a {@code StringBuilder}; computing summary information about
* elements such as sum, min, max, or average; computing "pivot table" summaries
* such as "maximum valued transaction by seller", etc. The class {@link Collectors}
* provides implementations of many common mutable reductions.
Collectors provides ready-made implementations of common Collector reductions; Collectors is effectively a factory.
* <p>A {@code Collector} is specified by four functions that work together to
* accumulate entries into a mutable result container, and optionally perform
* a final transform on the result. They are: <ul>
* <li>creation of a new result container ({@link #supplier()})</li>
* <li>incorporating a new data element into a result container ({@link #accumulator()})</li>
* <li>combining two result containers into one ({@link #combiner()})</li>
* <li>performing an optional final transform on the container ({@link #finisher()})</li>
* </ul>
A Collector is specified by these four functions.
* <p>Collectors also have a set of characteristics, such as
* {@link Characteristics#CONCURRENT}, that provide hints that can be used by a
* reduction implementation to provide better performance.
*
* <p>A sequential implementation of a reduction using a collector would
* create a single result container using the supplier function, and invoke the
* accumulator function once for each input element. A parallel implementation
* would partition the input, create a result container for each partition,
* accumulate the contents of each partition into a subresult for that partition,
* and then use the combiner function to merge the subresults into a combined
* result.
For example, with four partial results 1, 2, 3, 4:
combine(1, 2) -> 5
combine(5, 3) -> 6
combine(6, 4) -> 6
### Identity and associativity explained:
* <p>To ensure that sequential and parallel executions produce equivalent
* results, the collector functions must satisfy an <em>identity</em> and an
* <a href="package-summary.html#Associativity">associativity</a> constraints.
To ensure that sequential and parallel execution produce equivalent results, extra handling is required; two constraints must be satisfied:
identity
associativity
* <p>The identity constraint says that for any partially accumulated result,
* combining it with an empty result container must produce an equivalent
* result. That is, for a partially accumulated result {@code a} that is the
* result of any series of accumulator and combiner invocations, {@code a} must
* be equivalent to {@code combiner.apply(a, supplier.get())}.
Identity: for any partially accumulated result a, a must be equivalent to combiner.apply(a, supplier.get()).
* <p>The associativity constraint says that splitting the computation must
* produce an equivalent result. That is, for any input elements {@code t1}
* and {@code t2}, the results {@code r1} and {@code r2} in the computation
* below must be equivalent:
* <pre>{@code
* A a1 = supplier.get();            (sequential:)
* accumulator.accept(a1, t1);        (first argument: the accumulated partial result; second argument: the next element to process)
* accumulator.accept(a1, t2);
* R r1 = finisher.apply(a1); // result without splitting
*
* A a2 = supplier.get();            (parallel, after splitting:)
* accumulator.accept(a2, t1);        (first argument: the accumulated partial result; second argument: the next element to process)
* A a3 = supplier.get();
* accumulator.accept(a3, t2);
* R r2 = finisher.apply(combiner.apply(a2, a3)); // result with splitting
* } </pre>
Associativity: as in the example above; in the end r1 must be equivalent to r2.
* <p>For collectors that do not have the {@code UNORDERED} characteristic,
* two accumulated results {@code a1} and {@code a2} are equivalent if
* {@code finisher.apply(a1).equals(finisher.apply(a2))}. For unordered
* collectors, equivalence is relaxed to allow for non-equality related to
* differences in order. (For example, an unordered collector that accumulated
* elements to a {@code List} would consider two lists equivalent if they
* contained the same elements, ignoring order.)
For unordered collectors, equivalence is relaxed: inequality that stems only from ordering is ignored.
Two collections containing the same elements in different orders are still considered equivalent.
### Composing collectors, and caveats:
* <p>Libraries that implement reduction based on {@code Collector}, such as
* {@link Stream#collect(Collector)}, must adhere to the following constraints:
* <ul>
* <li>The first argument passed to the accumulator function, both
* arguments passed to the combiner function, and the argument passed to the
* finisher function must be the result of a previous invocation of the
* result supplier, accumulator, or combiner functions.</li>
* <li>The implementation should not do anything with the result of any of
* the result supplier, accumulator, or combiner functions other than to
* pass them again to the accumulator, combiner, or finisher functions,
* or return them to the caller of the reduction operation.</li>
Concretely: an implementation should do nothing with the intermediate results other than pass them back into the collector functions or return the final result to the caller.
* <li>If a result is passed to the combiner or finisher
* function, and the same object is not returned from that function, it is
* never used again.</li>
If a result is passed to the combiner or finisher and a different object comes back (a new result or new object was created), the original result is never used again.
* <li>Once a result is passed to the combiner or finisher function, it
* is never passed to the accumulator function again.</li>
Once a result has been passed to the combiner or finisher function, it is never passed to the accumulator again.
* <li>For non-concurrent collectors, any result returned from the result
* supplier, accumulator, or combiner functions must be serially
* thread-confined. This enables collection to occur in parallel without
* the {@code Collector} needing to implement any additional synchronization.
* The reduction implementation must manage that the input is properly
* partitioned, that partitions are processed in isolation, and combining
* happens only after accumulation is complete.</li>
Each thread works on its own partition in isolation; combining only happens after accumulation is complete.
* <li>For concurrent collectors, an implementation is free to (but not
* required to) implement reduction concurrently. A concurrent reduction
* is one where the accumulator function is called concurrently from
* multiple threads, using the same concurrently-modifiable result container,
* rather than keeping the result isolated during accumulation.
* A concurrent reduction should only be applied if the collector has the
* {@link Characteristics#UNORDERED} characteristics or if the
* originating data is unordered.</li>
With a non-concurrent collector, four threads produce four intermediate result containers.
With a concurrent collector, the four threads call into one shared result container.
* </ul>
*
* <p>In addition to the predefined implementations in {@link Collectors}, the
* static factory methods {@link #of(Supplier, BiConsumer, BinaryOperator, Characteristics...)}
* can be used to construct collectors. For example, you could create a collector
* that accumulates widgets into a {@code TreeSet} with:
*
* <pre>{@code
* Collector<Widget, ?, TreeSet<Widget>> intoSet =
* Collector.of(TreeSet::new, TreeSet::add,
* (left, right) -> { left.addAll(right); return left; });
* }</pre>
Collector.of(how to create a new container, how to fold an element into the container, how to merge containers across threads)
adds every Widget in the stream into a TreeSet.
* (This behavior is also implemented by the predefined collector
* {@link Collectors#toCollection(Supplier)}).
*
* @apiNote
* Performing a reduction operation with a {@code Collector} should produce a
* result equivalent to:
* <pre>{@code
* R container = collector.supplier().get();
* for (T t : data)
* collector.accumulator().accept(container, t);
* return collector.finisher().apply(container);
* }</pre>
API note: the canonical reduction a Collector performs: create the container with the supplier, fold in each element with the accumulator, then apply the finisher.
* <p>However, the library is free to partition the input, perform the reduction
* on the partitions, and then use the combiner function to combine the partial
* results to achieve a parallel reduction. (Depending on the specific reduction
* operation, this may perform better or worse, depending on the relative cost
* of the accumulator and combiner functions.)
Performance depends on the relative cost of the accumulator and combiner functions; a parallel stream is not necessarily faster than a sequential one.
* <p>Collectors are designed to be <em>composed</em>; many of the methods
* in {@link Collectors} are functions that take a collector and produce
* a new collector. For example, given the following collector that computes
* the sum of the salaries of a stream of employees:
* <pre>{@code
* Collector<Employee, ?, Integer> summingSalaries
* = Collectors.summingInt(Employee::getSalary))
* }</pre>
Collectors are designed to be composed: many methods take a collector and produce a new collector.
For example, a collector that sums employee salaries.
* If we wanted to create a collector to tabulate the sum of salaries by
* department, we could reuse the "sum of salaries" logic using
* {@link Collectors#groupingBy(Function, Collector)}:
* <pre>{@code
* Collector<Employee, ?, Map<Department, Integer>> summingSalariesByDept
* = Collectors.groupingBy(Employee::getDepartment, summingSalaries);
* }</pre>
If we want to build a new collector, we can reuse an existing one.
That is what the snippet above shows.
* @see Stream#collect(Collector)
* @see Collectors
*
* @param <T> the type of input elements to the reduction operation
<T>: the type of each element in the stream.
* @param <A> the mutable accumulation type of the reduction operation (often
* hidden as an implementation detail)
<A>: the type of the mutable accumulation container for the reduction, i.e. the intermediate result type (for example ArrayList).
* @param <R> the result type of the reduction operation
<R>: the type of the final result.
* @since 1.8
*/
public interface Collector<T, A, R>{
/**
* A function that creates and returns a new mutable result container.
* A is the type of the result container returned each time.
* @return a function which returns a new, mutable result container
*/
Supplier<A> supplier(); // provides a new result container
/**
* A function that folds a value into a mutable result container.
* A is the intermediate result type; T is the type of the next element to be processed.
* @return a function which folds a value into a mutable result container
*/
BiConsumer<A, T> accumulator(); //keeps folding elements into the result container
/**
* A function that accepts two partial results and merges them. The
* combiner function may fold state from one argument into the other and
* return that, or may return a new result container.
* A is the intermediate result type.
* @return a function which combines two partial results into a combined
* result
*/
BinaryOperator<A> combiner(); //merges partial results across threads
/**
Closely tied to parallel streams.
It accepts two partial results and merges them into one.
If the combiner runs with four threads, there are four partial results to merge.
For example, with four partial results 1, 2, 3, 4:
combine(1, 2) -> 5
combine(5, 3) -> 6
combine(6, 4) -> 6
Merging 1 and 2 into a new 5 corresponds to "may return a new result container".
Merging 6 and 4 and returning 6 corresponds to "may fold state from one argument into the other and return that".
*/
/**
* Perform the final transformation from the intermediate accumulation type
* {@code A} to the final result type {@code R}.
*R is the type of the final result.
* <p>If the characteristic {@code IDENTITY_TRANSFORM} is
* set, this function may be presumed to be an identity transform with an
* unchecked cast from {@code A} to {@code R}.
*
* @return a function which transforms the intermediate result to the final
* result
*/
Function<A, R> finisher(); // transforms the intermediate container into the final return value
/**
* Returns a {@code Set} of {@code Collector.Characteristics} indicating
* the characteristics of this Collector. This set should be immutable.
*
* @return an immutable set of collector characteristics
*/
Set<Characteristics> characteristics(); //the set of this collector's characteristics
/**
* Returns a new {@code Collector} described by the given {@code supplier},
* {@code accumulator}, and {@code combiner} functions. The resulting
* {@code Collector} has the {@code Collector.Characteristics.IDENTITY_FINISH}
* characteristic.
*
* @param supplier The supplier function for the new collector
* @param accumulator The accumulator function for the new collector
* @param combiner The combiner function for the new collector
* @param characteristics The collector characteristics for the new
* collector
* @param <T> The type of input elements for the new collector
* @param <R> The type of intermediate accumulation result, and final result,
* for the new collector
* @throws NullPointerException if any argument is null
* @return the new {@code Collector}
*/
public static<T, R> Collector<T, R, R> of(Supplier<R> supplier,
BiConsumer<R, T> accumulator,
BinaryOperator<R> combiner,
Characteristics... characteristics) {
Objects.requireNonNull(supplier);
Objects.requireNonNull(accumulator);
Objects.requireNonNull(combiner);
Objects.requireNonNull(characteristics);
Set<Characteristics> cs = (characteristics.length == 0)
? Collectors.CH_ID
: Collections.unmodifiableSet(EnumSet.of(Collector.Characteristics.IDENTITY_FINISH,
characteristics));
return new Collectors.CollectorImpl<>(supplier, accumulator, combiner, cs);
}
/**
* Returns a new {@code Collector} described by the given {@code supplier},
* {@code accumulator}, {@code combiner}, and {@code finisher} functions.
*
* @param supplier The supplier function for the new collector
* @param accumulator The accumulator function for the new collector
* @param combiner The combiner function for the new collector
* @param finisher The finisher function for the new collector
* @param characteristics The collector characteristics for the new
* collector
* @param <T> The type of input elements for the new collector
* @param <A> The intermediate accumulation type of the new collector
* @param <R> The final result type of the new collector
* @throws NullPointerException if any argument is null
* @return the new {@code Collector}
*/
public static<T, A, R> Collector<T, A, R> of(Supplier<A> supplier,
BiConsumer<A, T> accumulator,
BinaryOperator<A> combiner,
Function<A, R> finisher,
Characteristics... characteristics) {
Objects.requireNonNull(supplier);
Objects.requireNonNull(accumulator);
Objects.requireNonNull(combiner);
Objects.requireNonNull(finisher);
Objects.requireNonNull(characteristics);
Set<Characteristics> cs = Collectors.CH_NOID;
if (characteristics.length > 0) {
cs = EnumSet.noneOf(Characteristics.class);
Collections.addAll(cs, characteristics);
cs = Collections.unmodifiableSet(cs);
}
return new Collectors.CollectorImpl<>(supplier, accumulator, combiner, finisher, cs);
}
/**
* Characteristics indicating properties of a {@code Collector}, which can
* be used to optimize reduction implementations.
*/
enum Characteristics { // characteristics
/**
* Indicates that this collector is <em>concurrent</em>, meaning that
* the result container can support the accumulator function being
* called concurrently with the same result container from multiple
* threads.
* Concurrent: the same result container may be used by the accumulator from multiple threads at once.
* <p>If a {@code CONCURRENT} collector is not also {@code UNORDERED},
* then it should only be evaluated concurrently if applied to an
* unordered data source.
If the collector is not also UNORDERED, it should only be evaluated concurrently on an unordered data source.
Without CONCURRENT, a parallel stream still works, but each thread uses its own result container rather than a shared one, so the finisher must be called.
With CONCURRENT, multiple threads operate on one shared result container, so the finisher is not needed.
*/
CONCURRENT,
/**
* Indicates that the collection operation does not commit to preserving
* the encounter order of input elements. (This might be true if the
* result container has no intrinsic order, such as a {@link Set}.)
The collection operation does not preserve encounter order.
*/
UNORDERED,
/**
* Indicates that the finisher function is the identity function and
* can be elided. If set, it must be the case that an unchecked cast
* from A to R will succeed.
If this characteristic is set, the finisher is the identity function, and the unchecked cast from A to R must succeed.
*/
IDENTITY_FINISH
}
}
Java 8 (4/5): Collectors and Comparators in Detail, with Source Analysis
Collectors in detail; multi-level grouping and partitioning
Why does the Collectors class define a static inner class?
static class CollectorImpl<T, A, R> implements Collector<T, A, R>
By design Collectors is a helper class, a factory whose job is to give developers the common collector implementations; all of its methods are static and can be called directly.
The defining trait of functional programming: say what to do, not how to do it. The developer focuses on the what; the underlying implementation handles the how.
/**
* Implementations of {@link Collector} that implement various useful reduction
* operations, such as accumulating elements into collections, summarizing
* elements according to various criteria, etc.
If the collector you need is not provided, you can write your own.
* <p>The following are examples of using the predefined collectors to perform
* common mutable reduction tasks:
* Examples:
* <pre>{@code
* // Accumulate names into a List
* List<String> list = people.stream().map(Person::getName).collect(Collectors.toList());
*
* // Accumulate names into a TreeSet (a sorted collection)
* Set<String> set = people.stream().map(Person::getName).collect(Collectors.toCollection(TreeSet::new));
*
* // Convert elements to strings and concatenate them, separated by commas
* String joined = things.stream()
* .map(Object::toString)
* .collect(Collectors.joining(", "));
*
* // Compute sum of salaries of employees
* int total = employees.stream()
* .collect(Collectors.summingInt(Employee::getSalary)));
*
* // Group employees by department
* Map<Department, List<Employee>> byDept
* = employees.stream()
* .collect(Collectors.groupingBy(Employee::getDepartment));
*
* // Compute sum of salaries by department
* Map<Department, Integer> totalByDept
* = employees.stream()
* .collect(Collectors.groupingBy(Employee::getDepartment,
* Collectors.summingInt(Employee::getSalary)));
*
* // Partition students into passing and failing
* Map<Boolean, List<Student>> passingFailing =
* students.stream()
* .collect(Collectors.partitioningBy(s -> s.getGrade() >= PASS_THRESHOLD));
*
* }</pre>
*
* @since 1.8  (the common collectors are provided; define your own if one is missing)
*/
public final class Collectors {
Example: applying the Collectors methods:
public static void main(String[] args) {
Student student1 = new Student("zhangsan", 80);
Student student2 = new Student("lisi", 90);
Student student3 = new Student("wangwu", 100);
Student student4 = new Student("zhaoliu", 90);
Student student5 = new Student("zhaoliu", 90);
List<Student> students = Arrays.asList(student1, student2, student3, student4, student5);
//Turn the list into a stream, then collect it back into a list.
List<Student> students1 = students.stream().collect(Collectors.toList());
students1.forEach(System.out::println);
System.out.println("- - - - - - -");
// How collect works underneath.
//Several ways to do the same thing; which is better? The more specific one, because it avoids auto-boxing and unboxing.
System.out.println("count:" + students.stream().collect(Collectors.counting()));
System.out.println("count:" + (Long) students.stream().count());
System.out.println("- - - - - - - -");
//Practice examples
// Find the student with the lowest score and print it.
students.stream().collect(minBy(Comparator.comparingInt(Student::getScore))).ifPresent(System.out::println);
// Find the highest score in the collection
students.stream().collect(maxBy(Comparator.comparingInt(Student::getScore))).ifPresent(System.out::println);
// Average score
System.out.println(students.stream().collect(averagingInt(Student::getScore)));
// Sum of the scores
System.out.println(students.stream().collect(summingInt(Student::getScore)));
// All the summary statistics at once; result: IntSummaryStatistics{count=5, sum=450, min=80, average=90.000000, max=100}
System.out.println(students.stream().collect(summarizingInt(Student::getScore)));
System.out.println(" - - - - - ");
// String concatenation; result: zhangsanlisiwangwuzhaoliuzhaoliu
System.out.println(students.stream().map(Student::getName).collect(joining()));
//With a delimiter; result: zhangsan,lisi,wangwu,zhaoliu,zhaoliu
System.out.println(students.stream().map(Student::getName).collect(joining(",")));
// With a prefix and suffix; result: hello zhangsan,lisi,wangwu,zhaoliu,zhaoliu world
System.out.println(students.stream().map(Student::getName).collect(joining(",", "hello ", " world")));
System.out.println("- - - - - - ");
// groupingBy: multi-level grouping
// Group by score, then by name; the output is:
// {80={zhangsan=[Student{name='zhangsan', score=80}]},
// 100={wangwu=[Student{name='wangwu', score=100}]},
// 90={lisi=[Student{name='lisi', score=90}], zhaoliu=[Student{name='zhaoliu', score=90}, Student{name='zhaoliu', score=90}]}}
Map<Integer, Map<String, List<Student>>> collect = students.stream().collect(groupingBy(Student::getScore, groupingBy(Student::getName)));
System.out.println(collect);
System.out.println("- - - - - - - ");
// partitioningBy; the output is: {false=[Student{name='zhangsan', score=80}], true=[Student{name='lisi', score=90}, Student{name='wangwu', score=100}, Student{name='zhaoliu', score=90}, Student{name='zhaoliu', score=90}]}
Map<Boolean, List<Student>> collect1 = students.stream().collect(partitioningBy(student -> student.getScore() > 80));
System.out.println(collect1);
// Partition by score > 80, then partition again by score > 90
//The output is: {false={false=[Student{name='zhangsan', score=80}], true=[]}, true={false=[Student{name='lisi', score=90}, Student{name='zhaoliu', score=90}, Student{name='zhaoliu', score=90}], true=[Student{name='wangwu', score=100}]}}
Map<Boolean, Map<Boolean, List<Student>>> collect2 = students.stream().collect(partitioningBy(student -> student.getScore() > 80, partitioningBy(student -> student.getScore() > 90)));
System.out.println(collect2);
//Partition, then count the elements in each partition; result: {false=1, true=4}
Map<Boolean, Long> collect3 = students.stream().collect(partitioningBy(student -> student.getScore() > 80, counting()));
System.out.println(collect3);
System.out.println("- - - - - - - ");
//Group by name and reduce each group to its lowest score: collectingAndThen wraps minBy and then unwraps with Optional::get, which is guaranteed to hold a value here.
students.stream().collect(groupingBy(Student::getName,collectingAndThen(minBy(Comparator.comparingInt(Student::getScore)), Optional::get)));
}
Comparator in detail, plus a type-inference corner case
Comparator introduces a number of default methods.
When several methods can do the same job, use the specialized (primitive) one: it is more efficient because it avoids boxing and unboxing.
Example: the basics
public static void main(String[] args) {
List<String> list = Arrays.asList("nihao", "hello", "world", "welcome");
//Sort the list alphabetically, ascending
// list.stream().sorted().forEach(System.out::println);
//Sort by string length
// Collections.sort(list, (item1, item2) -> item1.length() - item2.length());
// Collections.sort(list, Comparator.comparingInt(String::length));
//Sort by length, descending
// list.sort(Comparator.comparingInt(String::length).reversed());
// The following form fails to compile: item is inferred as Object.
//Lambda type inference: when the type cannot be inferred, specify it yourself
// list.sort(Comparator.comparingInt(item-> item.length()).reversed());
//Written like this it compiles.
list.sort(Comparator.comparingInt((String item )-> item.length()).reversed());
//Why can't the type be inferred here?
// Inference works for list.stream()...: Stream<T> carries the element type, so the exact type can be inferred.
//Here nothing pins the type down: ToIntFunction<? super T> could accept String or any supertype, so it is treated as Object.
// list.sort(Comparator.comparingInt((Boolean item)-> 1).reversed());
//Using Boolean like this fails to compile.
System.out.println(list);
}
Deeper Comparator practice
Example: a two-level comparison: sort by string length ascending; when lengths are equal, sort by character order.
Practicing multi-level sorting with thenComparing():
List<String> list = Arrays.asList("nihao", "hello", "world", "welcome");
//Two-level comparison: by length ascending, then (case-insensitively) alphabetically when lengths are equal. Several ways to write it:
list.sort(Comparator.comparingInt(String::length).thenComparing(String.CASE_INSENSITIVE_ORDER));
list.sort(Comparator.comparingInt(String::length).thenComparing((item1,item2) -> item1.toUpperCase().compareTo(item2.toUpperCase())));
list.sort(Comparator.comparingInt(String::length).thenComparing(Comparator.comparing(String::toUpperCase)));
//Reverse the second-level ordering with reverseOrder():
list.sort(Comparator.comparingInt(String::length).thenComparing(String::toLowerCase,Comparator.reverseOrder()));
// Sort by length descending, then by character order descending
list.sort(Comparator.comparingInt(String::length).reversed()
.thenComparing(String::toLowerCase,Comparator.reverseOrder()));
//Multi-level sorting
list.sort(Comparator.comparingInt(String::length).reversed()
.thenComparing(String::toLowerCase, Comparator.reverseOrder())
.thenComparing(Comparator.reverseOrder()));
// The final thenComparing() has no visible effect here.
Writing a simple custom collector
The JDK provides the Collector interface to implement.
public class MySetCollector<T> implements Collector<T,Set<T>,Set<T>> {
@Override
public Supplier<Set<T>> supplier() {
//Provides an empty result container
System.out.println("supplier invoked! ");
return HashSet::new; // takes no arguments, returns a new Set
}
@Override
public BiConsumer<Set<T>, T> accumulator() {
// Accumulator: takes two arguments and returns nothing.
//Its job: keep adding elements to the set
System.out.println("accumulator invoked! ");
return Set<T>::add ;
// return HashSet<T>::add ; would not compile: the intermediate container is typed as Set<T>, and any Set must satisfy it, so the receiver type must be Set.
}
@Override
public BinaryOperator<Set<T>> combiner() {
//Merges the partial results produced by a parallel stream.
System.out.println("combiner invoked! ");
return (set1,set2)->{
set1.addAll(set2);
return set1;
};
}
@Override
public Function<Set<T>, Set<T>> finisher() {
//Finisher: combines everything into the final result type.
//If the intermediate type and the final result type are the same, this method is not executed;
System.out.println("finisher invoked! ");
// return t -> t ;
return Function.identity(); // always returns its argument.
}
@Override
public Set<Characteristics> characteristics() {
System.out.println("characterstics invoked! ");
return Collections.unmodifiableSet(EnumSet.of(Characteristics.IDENTITY_FINISH,Characteristics.UNORDERED)); // if IDENTITY_FINISH is left out here, finisher() gets called
}
public static void main(String[] args) {
List<String> list = Arrays.asList("hello", "world");
Set<String> collect = list.stream().collect(new MySetCollector<>());
System.out.println(collect);
}
The output is:
supplier invoked!
accumulator invoked!
combiner invoked!
characterstics invoked!
characterstics invoked!
[world, hello]
}
Next, let's follow the JDK source to see how the call proceeds.
@Override
@SuppressWarnings("unchecked")
public final <R, A> R collect(Collector<? super P_OUT, A, R> collector) {
A container;
if (isParallel()
&& (collector.characteristics().contains(Collector.Characteristics.CONCURRENT))
&& (!isOrdered() || collector.characteristics().contains(Collector.Characteristics.UNORDERED))) {
container = collector.supplier().get();
BiConsumer<A, ? super P_OUT> accumulator = collector.accumulator();
forEach(u -> accumulator.accept(container, u));
}
else {
container = evaluate(ReduceOps.makeRef(collector));
}
return collector.characteristics().contains(Collector.Characteristics.IDENTITY_FINISH)
? (R) container
: collector.finisher().apply(container);
}
A deeper look at the custom collector, and a parallel-stream pitfall
// Example requirement: collect a Set and enrich the result into a Map.
// Input:  Set<String>           sample input:  [hello, world, hello world]
// Output: Map<String, String>   sample output: {hello=hello, world=world, hello world=hello world}
public class MySetCollector2<T> implements Collector<T, Set<T>, Map<T, T>> {
@Override
public Supplier<Set<T>> supplier() {
System.out.println("supplier invoked!");
return HashSet::new;
}
@Override
public BiConsumer<Set<T>, T> accumulator() {
System.out.println("accumlator invoked!");
return (set, item) -> {
set.add(item);
//Print the thread on each call; with this data it prints 6 times,
System.out.println("accunlator : " +set+ ", "+ Thread.currentThread().getName());
//This is where the exception comes from:
// one thread modifies the set while another thread iterates (traverses) it, so a ConcurrentModificationException is thrown. If you run in parallel, do not add extra work inside the accumulator: add the element and nothing more; do not also print the container.
};
}
@Override
public BinaryOperator<Set<T>> combiner() {
System.out.println("combiner invoked!");
//Only invoked for parallel streams: merges the partial results
return (set1, set2) -> {
set1.addAll(set2);
return set1;
};
}
@Override
public Function<Set<T>, Map<T, T>> finisher() {
System.out.println("finisher invoked!");
// If the intermediate type and the final type were the same, this would not be called.
//Here they differ, so it is called.
return set -> {
Map<T, T> map = new HashMap<>();
// Map<T, T> map = new TreeMap<>(); // would return a sorted Map directly
set.forEach(item -> map.put(item,item));
return map;
};
}
@Override
public Set<Characteristics> characteristics() {
System.out.println(" characteristics invoked");
return Collections.unmodifiableSet(EnumSet.of(Characteristics.UNORDERED));// Do not set these characteristics carelessly; understand what each enum value means.
// return Collections.unmodifiableSet(EnumSet.of(Characteristics.UNORDERED,Characteristics.CONCURRENT));
//With Characteristics.CONCURRENT added,
// the run can fail with: Caused by: java.util.ConcurrentModificationException
// return Collections.unmodifiableSet(EnumSet.of(Characteristics.UNORDERED,Characteristics.IDENTITY_FINISH));
// With Characteristics.IDENTITY_FINISH added, it fails:
// Process 'command '/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/bin/java'' finished with non-zero exit value 1
// What IDENTITY_FINISH really means: the finisher is the identity function and the unchecked cast from A to R must succeed; if it cannot, an exception is thrown.
// The Characteristics set declares what this collector claims about itself; even if the claim does not match reality, the framework acts on it anyway.
}
public static void main(String[] args) {
List<String> list = Arrays.asList("hello","hello", "world", "helloworld","1","4","j");
Set<String> set = new HashSet<>(list);
System.out.println("set"+set);
// Map<String, String> collect = set.stream().collect(new MySetCollector2<>());
Map<String, String> collect = set.parallelStream().collect(new MySetCollector2<>()); //parallel stream
System.out.println(collect);
}
}
The parallel-stream pitfall in detail
Parallel run:
accumlator invoked!
accunlator : [j], main
accunlator : [j, hello], main
accunlator : [helloworld, 4, j, hello], ForkJoinPool.commonPool-worker-2
accunlator : [helloworld, 1, 4, j, hello], ForkJoinPool.commonPool-worker-2
accunlator : [helloworld, 1, world, 4, j, hello], ForkJoinPool.commonPool-worker-2
Sequential run:
accunlator : [j], main
accunlator : [helloworld], ForkJoinPool.commonPool-worker-11
accunlator : [helloworld, 1], ForkJoinPool.commonPool-worker-11
accunlator : [helloworld, 1, world], ForkJoinPool.commonPool-worker-11
accunlator : [4], ForkJoinPool.commonPool-worker-9
accunlator : [j, hello], main
/**
* Characteristics indicating properties of a {@code Collector}, which can
* be used to optimize reduction implementations.
*/
enum Characteristics { // characteristics
/**
* Indicates that this collector is <em>concurrent</em>, meaning that
* the result container can support the accumulator function being
* called concurrently with the same result container from multiple
* threads.
* Concurrent: the same result container may be used by the accumulator from multiple threads at once.
* <p>If a {@code CONCURRENT} collector is not also {@code UNORDERED},
* then it should only be evaluated concurrently if applied to an
* unordered data source.
If the collector is not also UNORDERED, it should only be evaluated concurrently on an unordered data source.
Without CONCURRENT, a parallel stream still works, but each thread uses its own result container rather than a shared one, so the finisher must be called.
With CONCURRENT, multiple threads operate on one shared result container, so the finisher is not needed.
*/
CONCURRENT,
/**
* Indicates that the collection operation does not commit to preserving
* the encounter order of input elements. (This might be true if the
* result container has no intrinsic order, such as a {@link Set}.)
The collection operation does not preserve encounter order; it is unordered.
*/
UNORDERED,
/**
* Indicates that the finisher function is the identity function and
* can be elided. If set, it must be the case that an unchecked cast
* from A to R will succeed.
If this characteristic is set, the finisher is the identity function and the cast from A to R must succeed; the finisher method is not invoked.
*/
IDENTITY_FINISH
}
The root cause of the exception:
One thread modifies the collection while another thread iterates (traverses) it, so the program throws a ConcurrentModificationException.
When operating in parallel, do not add extra work inside the operation: add the element and nothing more; do not also print the container.
Without CONCURRENT you can still run a parallel stream, but each thread operates on its own result container rather than a shared one, so the finisher must be called.
With CONCURRENT, multiple threads operate on one shared result container, so the finisher is not needed.
A note on hyper-threading:
Hyper-Threading (HT) is a technology developed by Intel, released in 2002. It was initially used only in Xeon processors, where it was called "Super-Threading", and later shipped in the Pentium 4 HT; its early codename was Jackson.
With this technology Intel presents two logical threads per physical CPU. The later [Pentium D](https://baike.baidu.com/item/Pentium D) did not support hyper-threading, but it integrated two physical cores, so two threads were still visible. The direction of hyper-threading is to keep raising the number of logical threads per processor: the Core i7-6950X that Intel released in 2016 pairs 10 cores with hyper-threading to give 20 logical threads.
Collector summary:
practice re-implementing the methods of the Collectors class. A collector always works through an intermediate container, so the factory methods in Collectors are worth summarizing.
Once the prerequisites are in place, the harder material feels natural.
For the Collectors static factory class the implementations fall into two cases:
implemented directly with CollectorImpl;
implemented via the reducing() method, which is itself built on CollectorImpl.
In other words, everything ultimately goes through CollectorImpl.
1. toCollection(collectionFactory): collects the elements into whatever collection the factory supplies.
public static <T, C extends Collection<T>>
Collector<T, ?, C> toCollection(Supplier<C> collectionFactory) {
return new CollectorImpl<>(collectionFactory, Collection<T>::add,
(r1, r2) -> { r1.addAll(r2); return r1; },
CH_ID);
}
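A quick usage sketch of my own (assuming java.util.* and java.util.stream.* are imported; the values are made up):
// collect into a specific collection type chosen by the caller
TreeSet<String> sorted = Stream.of("world", "hello", "hello")
        .collect(Collectors.toCollection(TreeSet::new)); // [hello, world], de-duplicated and sorted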
2. toList() is a concrete specialization of the toCollection() idea, hard-wired to ArrayList.
public static <T>
Collector<T, ?, List<T>> toList() {
return new CollectorImpl<>((Supplier<List<T>>) ArrayList::new, List::add,
(left, right) -> { left.addAll(right); return left; },
CH_ID);
}
3. toSet() is the same idea hard-wired to HashSet (and therefore UNORDERED).
public static <T>
Collector<T, ?, Set<T>> toSet() {
return new CollectorImpl<>((Supplier<Set<T>>) HashSet::new, Set::add,
(left, right) -> { left.addAll(right); return left; },
CH_UNORDERED_ID);
}
4. joining(): concatenates the elements into a single String; there are also overloads taking a delimiter, and a delimiter plus prefix and suffix.
public static Collector<CharSequence, ?, String> joining() {
return new CollectorImpl<CharSequence, StringBuilder, String>(
StringBuilder::new, StringBuilder::append,
(r1, r2) -> { r1.append(r2); return r1; },
StringBuilder::toString, CH_NOID);
}
public static Collector<CharSequence, ?, String> joining(CharSequence delimiter) {
return joining(delimiter, "", "");
}
public static Collector<CharSequence, ?, String> joining(CharSequence delimiter,
CharSequence prefix,
CharSequence suffix) {
return new CollectorImpl<>(
() -> new StringJoiner(delimiter, prefix, suffix),
StringJoiner::add, StringJoiner::merge,
StringJoiner::toString, CH_NOID);
}
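A quick usage sketch of my own showing the three overloads (imports assumed as above):
List<String> words = Arrays.asList("hello", "world", "java");
String a = words.stream().collect(Collectors.joining());               // helloworldjava
String b = words.stream().collect(Collectors.joining(", "));           // hello, world, java
String c = words.stream().collect(Collectors.joining(", ", "[", "]")); // [hello, world, java]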
5. mapping(): applies a mapping function to each input element and feeds the result to a downstream collector.
public static <T, U, A, R>
Collector<T, ?, R> mapping(Function<? super T, ? extends U> mapper,
Collector<? super U, A, R> downstream) {
BiConsumer<A, ? super U> downstreamAccumulator = downstream.accumulator();
return new CollectorImpl<>(downstream.supplier(),
(r, t) -> downstreamAccumulator.accept(r, mapper.apply(t)),
downstream.combiner(), downstream.finisher(),
downstream.characteristics());
}
For example (from the javadoc):
Map<City, Set<String>> lastNamesByCity
= people.stream().collect(groupingBy(Person::getCity, mapping(Person::getLastName, toSet())));
6. collectingAndThen(): runs a downstream collector first, then applies one more finishing transformation to its result.
public static<T,A,R,RR> Collector<T,A,RR> collectingAndThen(Collector<T,A,R> downstream,
Function<R,RR> finisher) {
Set<Collector.Characteristics> characteristics = downstream.characteristics();
if (characteristics.contains(Collector.Characteristics.IDENTITY_FINISH)) {
if (characteristics.size() == 1)
characteristics = Collectors.CH_NOID;
else {
characteristics = EnumSet.copyOf(characteristics);
characteristics.remove(Collector.Characteristics.IDENTITY_FINISH);
// Why is IDENTITY_FINISH removed here?
// If it were kept, the extra finisher would be skipped and the intermediate result type would be returned directly.
characteristics = Collections.unmodifiableSet(characteristics);
}
}
return new CollectorImpl<>(downstream.supplier(),
downstream.accumulator(),
downstream.combiner(),
downstream.finisher().andThen(finisher),
characteristics);
}
For example (from the javadoc):
List<String> people
= people.stream().collect(collectingAndThen(toList(),Collections::unmodifiableList));
7. counting(): counts the elements.
public static <T> Collector<T, ?, Long>
counting() {
return reducing(0L, e -> 1L, Long::sum);
}
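Usage sketch (my own example):
long n = Stream.of("hello", "world", "java").collect(Collectors.counting()); // 3
// for a plain count, Stream.of("hello", "world", "java").count() gives the same number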
8. minBy() / maxBy(): minimum and maximum by a Comparator.
public static <T> Collector<T, ?, Optional<T>>
minBy(Comparator<? super T> comparator) {
return reducing(BinaryOperator.minBy(comparator));
}
public static <T> Collector<T, ?, Optional<T>>
maxBy(Comparator<? super T> comparator) {
return reducing(BinaryOperator.maxBy(comparator));
}
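Usage sketch (my own example); note the Optional result, which is empty for an empty stream:
Optional<String> shortest = Stream.of("hello", "hi", "hey")
        .collect(Collectors.minBy(Comparator.comparingInt(String::length))); // Optional[hi]
Optional<String> longest = Stream.of("hello", "hi", "hey")
        .collect(Collectors.maxBy(Comparator.comparingInt(String::length))); // Optional[hello]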
9. summingInt() / summingLong() / summingDouble(): summation.
public static <T> Collector<T, ?, Integer>
summingInt(ToIntFunction<? super T> mapper) {
return new CollectorImpl<>(
() -> new int[1], // why not use a plain 0 as the intermediate state? a number is an immutable value and cannot be shared and mutated; a one-element array is a reference type and acts as a tiny mutable container.
(a, t) -> { a[0] += mapper.applyAsInt(t); },
(a, b) -> { a[0] += b[0]; return a; },
a -> a[0], CH_NOID);
}
public static <T> Collector<T, ?, Long>
summingLong(ToLongFunction<? super T> mapper) {
return new CollectorImpl<>(
() -> new long[1],
(a, t) -> { a[0] += mapper.applyAsLong(t); },
(a, b) -> { a[0] += b[0]; return a; },
a -> a[0], CH_NOID);
}
public static <T> Collector<T, ?, Double>
summingDouble(ToDoubleFunction<? super T> mapper) {
/*
* In the arrays allocated for the collect operation, index 0
* holds the high-order bits of the running sum, index 1 holds
* the low-order bits of the sum computed via compensated
* summation, and index 2 holds the simple sum used to compute
* the proper result if the stream contains infinite values of
* the same sign.
*/
return new CollectorImpl<>(
() -> new double[3],
(a, t) -> { sumWithCompensation(a, mapper.applyAsDouble(t));
a[2] += mapper.applyAsDouble(t);},
(a, b) -> { sumWithCompensation(a, b[0]);
a[2] += b[2];
return sumWithCompensation(a, b[1]); },
a -> computeFinalSum(a),
CH_NOID);
}
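Usage sketch (my own example):
int totalLength = Stream.of("hello", "world")
        .collect(Collectors.summingInt(String::length));         // 10
double sum = Stream.of(0.1, 0.2, 0.3)
        .collect(Collectors.summingDouble(Double::doubleValue)); // compensated summation limits rounding error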
10. averagingInt() / averagingLong() / averagingDouble(): averages.
public static <T> Collector<T, ?, Double>
averagingInt(ToIntFunction<? super T> mapper) {
return new CollectorImpl<>(
() -> new long[2],
(a, t) -> { a[0] += mapper.applyAsInt(t); a[1]++; },
(a, b) -> { a[0] += b[0]; a[1] += b[1]; return a; },
a -> (a[1] == 0) ? 0.0d : (double) a[0] / a[1], CH_NOID);
}
public static <T> Collector<T, ?, Double>
averagingLong(ToLongFunction<? super T> mapper) {
return new CollectorImpl<>(
() -> new long[2],
(a, t) -> { a[0] += mapper.applyAsLong(t); a[1]++; },
(a, b) -> { a[0] += b[0]; a[1] += b[1]; return a; },
a -> (a[1] == 0) ? 0.0d : (double) a[0] / a[1], CH_NOID);
}
public static <T> Collector<T, ?, Double>
averagingDouble(ToDoubleFunction<? super T> mapper) {
/*
* In the arrays allocated for the collect operation, index 0
* holds the high-order bits of the running sum, index 1 holds
* the low-order bits of the sum computed via compensated
* summation, and index 2 holds the number of values seen.
*/
return new CollectorImpl<>(
() -> new double[4],
(a, t) -> { sumWithCompensation(a, mapper.applyAsDouble(t)); a[2]++; a[3]+= mapper.applyAsDouble(t);},
(a, b) -> { sumWithCompensation(a, b[0]); sumWithCompensation(a, b[1]); a[2] += b[2]; a[3] += b[3]; return a; },
a -> (a[2] == 0) ? 0.0d : (computeFinalSum(a) / a[2]),
CH_NOID);
}
11. reducing(): general-purpose reduction, explained.
public static <T> Collector<T, ?, T>
reducing(T identity, BinaryOperator<T> op) {
return new CollectorImpl<>(
boxSupplier(identity),
(a, t) -> { a[0] = op.apply(a[0], t); },
(a, b) -> { a[0] = op.apply(a[0], b[0]); return a; },
a -> a[0],
CH_NOID);
}
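Usage sketch (my own example) for the two- and three-argument overloads:
Integer total = Stream.of(1, 2, 3, 4)
        .collect(Collectors.reducing(0, Integer::sum));                  // 10
Integer totalLength = Stream.of("hello", "world")
        .collect(Collectors.reducing(0, String::length, Integer::sum));  // 10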
12. groupingBy(): the grouping collectors in detail.
public static <T, K> Collector<T, ?, Map<K, List<T>>> // callers do not need to care how the intermediate type is handled.
groupingBy(Function<? super T, ? extends K> classifier) {
return groupingBy(classifier, toList()); // delegates to the two-argument groupingBy()
}
* @param <T> the type of the input elements // T: the type of the elements being received
* @param <K> the type of the keys // K: the key type produced by the classifier function
* @param <A> the intermediate accumulation type of the downstream collector
* @param <D> the result type of the downstream reduction
*
public static <T, K, A, D>
Collector<T, ?, Map<K, D>> groupingBy(Function<? super T, ? extends K> classifier,
Collector<? super T, A, D> downstream) {
return groupingBy(classifier, HashMap::new, downstream); // delegates to the three-argument groupingBy()
}
// the most general groupingBy():
/**
* Returns a {@code Collector} implementing a cascaded "group by" operation
* on input elements of type {@code T}, grouping elements according to a
* classification function, and then performing a reduction operation on
* the values associated with a given key using the specified downstream
* {@code Collector}. The {@code Map} produced by the Collector is created
* with the supplied factory function.
*
* <p>The classification function maps elements to some key type {@code K}.
* The downstream collector operates on elements of type {@code T} and
* produces a result of type {@code D}. The resulting collector produces a
* {@code Map<K, D>}.
*
* <p>For example, to compute the set of last names of people in each city,
* where the city names are sorted:
* <pre>{@code
* Map<City, Set<String>> namesByCity
* = people.stream().collect(groupingBy(Person::getCity, TreeMap::new,
* mapping(Person::getLastName, toSet())));
* }</pre>
*
* @implNote
* The returned {@code Collector} is not concurrent. For parallel stream
* pipelines, the {@code combiner} function operates by merging the keys
* from one map into another, which can be an expensive operation. If
* preservation of the order in which elements are presented to the downstream
* collector is not required, using {@link #groupingByConcurrent(Function, Supplier, Collector)}
* may offer better parallel performance.
* The returned collector is not concurrent. If encounter order does not matter, groupingByConcurrent() usually gives better parallel performance.
* @param <T> the type of the input elements
* @param <K> the type of the keys
* @param <A> the intermediate accumulation type of the downstream collector
* @param <D> the result type of the downstream reduction
* @param <M> the type of the resulting {@code Map}
* @param classifier a classifier function mapping input elements to keys
* @param downstream a {@code Collector} implementing the downstream reduction
* @param mapFactory a function which, when called, produces a new empty
* {@code Map} of the desired type
* @return a {@code Collector} implementing the cascaded group-by operation
*
* @see #groupingBy(Function, Collector)
* @see #groupingBy(Function)
* @see #groupingByConcurrent(Function, Supplier, Collector)
*/
public static <T, K, D, A, M extends Map<K, D>>
Collector<T, ?, M> groupingBy(Function<? super T, ? extends K> classifier,
Supplier<M> mapFactory,
Collector<? super T, A, D> downstream) {
Supplier<A> downstreamSupplier = downstream.supplier();
BiConsumer<A, ? super T> downstreamAccumulator = downstream.accumulator();
BiConsumer<Map<K, A>, T> accumulator = (m, t) -> {
K key = Objects.requireNonNull(classifier.apply(t), "element cannot be mapped to a null key");
A container = m.computeIfAbsent(key, k -> downstreamSupplier.get());
downstreamAccumulator.accept(container, t);
};
BinaryOperator<Map<K, A>> merger = Collectors.<K, A, Map<K, A>>mapMerger(downstream.combiner()); // takes two partial maps and returns one merged result.
@SuppressWarnings("unchecked")
Supplier<Map<K, A>> mangledFactory = (Supplier<Map<K, A>>) mapFactory; // unchecked cast of the map factory.
if (downstream.characteristics().contains(Collector.Characteristics.IDENTITY_FINISH)) {
// with IDENTITY_FINISH the finisher does not need to be called.
return new CollectorImpl<>(mangledFactory, accumulator, merger, CH_ID);
}
else {
@SuppressWarnings("unchecked")
Function<A, A> downstreamFinisher = (Function<A, A>) downstream.finisher();
Function<Map<K, A>, M> finisher = intermediate -> {
intermediate.replaceAll((k, v) -> downstreamFinisher.apply(v));
@SuppressWarnings("unchecked")
M castResult = (M) intermediate;
return castResult;
};
return new CollectorImpl<>(mangledFactory, accumulator, merger, finisher, CH_NOID);
}
}
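Usage sketch of my own for the one- and three-argument overloads (the element values are made up):
Map<Integer, List<String>> byLength = Stream.of("a", "bb", "cc", "ddd")
        .collect(Collectors.groupingBy(String::length));
// {1=[a], 2=[bb, cc], 3=[ddd]}

TreeMap<Integer, Long> countByLength = Stream.of("a", "bb", "cc", "ddd")
        .collect(Collectors.groupingBy(String::length, TreeMap::new, Collectors.counting()));
// {1=1, 2=2, 3=1}, with the keys sorted because the map factory is TreeMap::new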
13. groupingByConcurrent(): the concurrent grouping collector; use it only when the order of the data does not matter.
/**
* Returns a concurrent {@code Collector} implementing a cascaded "group by"
* operation on input elements of type {@code T}, grouping elements
* according to a classification function, and then performing a reduction
* operation on the values associated with a given key using the specified
* downstream {@code Collector}.
*/ // ConcurrentHashMap is a Map that supports concurrent access
public static <T, K>
Collector<T, ?, ConcurrentMap<K, List<T>>>
groupingByConcurrent(Function<? super T, ? extends K> classifier) {
return groupingByConcurrent(classifier, ConcurrentHashMap::new, toList());
}
public static <T, K, A, D>
Collector<T, ?, ConcurrentMap<K, D>> groupingByConcurrent(Function<? super T, ? extends K> classifier,
Collector<? super T, A, D> downstream) {
return groupingByConcurrent(classifier, ConcurrentHashMap::new, downstream);
}
public static <T, K, A, D, M extends ConcurrentMap<K, D>>
Collector<T, ?, M> groupingByConcurrent(Function<? super T, ? extends K> classifier,
Supplier<M> mapFactory,
Collector<? super T, A, D> downstream) {
Supplier<A> downstreamSupplier = downstream.supplier();
BiConsumer<A, ? super T> downstreamAccumulator = downstream.accumulator();
BinaryOperator<ConcurrentMap<K, A>> merger = Collectors.<K, A, ConcurrentMap<K, A>>mapMerger(downstream.combiner());
@SuppressWarnings("unchecked")
Supplier<ConcurrentMap<K, A>> mangledFactory = (Supplier<ConcurrentMap<K, A>>) mapFactory;
BiConsumer<ConcurrentMap<K, A>, T> accumulator;
if (downstream.characteristics().contains(Collector.Characteristics.CONCURRENT)) {
accumulator = (m, t) -> {
K key = Objects.requireNonNull(classifier.apply(t), "element cannot be mapped to a null key");
A resultContainer = m.computeIfAbsent(key, k -> downstreamSupplier.get());
downstreamAccumulator.accept(resultContainer, t);
};
}
else {
accumulator = (m, t) -> {
K key = Objects.requireNonNull(classifier.apply(t), "element cannot be mapped to a null key");
A resultContainer = m.computeIfAbsent(key, k -> downstreamSupplier.get());
synchronized (resultContainer) { // synchronization: many threads share the container, but only one at a time may accumulate into it.
downstreamAccumulator.accept(resultContainer, t);
}
};
}
if (downstream.characteristics().contains(Collector.Characteristics.IDENTITY_FINISH)) {
return new CollectorImpl<>(mangledFactory, accumulator, merger, CH_CONCURRENT_ID);
}
else {
@SuppressWarnings("unchecked")
Function<A, A> downstreamFinisher = (Function<A, A>) downstream.finisher();
Function<ConcurrentMap<K, A>, M> finisher = intermediate -> {
intermediate.replaceAll((k, v) -> downstreamFinisher.apply(v));
@SuppressWarnings("unchecked")
M castResult = (M) intermediate;
return castResult;
};
return new CollectorImpl<>(mangledFactory, accumulator, merger, finisher, CH_CONCURRENT_NOID);
}
}
14. partitioningBy(): partitioning by a predicate, in detail.
public static <T>
Collector<T, ?, Map<Boolean, List<T>>> partitioningBy(Predicate<? super T> predicate) {
return partitioningBy(predicate, toList());
}
public static <T, D, A>
Collector<T, ?, Map<Boolean, D>> partitioningBy(Predicate<? super T> predicate,
Collector<? super T, A, D> downstream) {
BiConsumer<A, ? super T> downstreamAccumulator = downstream.accumulator();
BiConsumer<Partition<A>, T> accumulator = (result, t) ->
downstreamAccumulator.accept(predicate.test(t) ? result.forTrue : result.forFalse, t);
BinaryOperator<A> op = downstream.combiner();
BinaryOperator<Partition<A>> merger = (left, right) ->
new Partition<>(op.apply(left.forTrue, right.forTrue),
op.apply(left.forFalse, right.forFalse));
Supplier<Partition<A>> supplier = () ->
new Partition<>(downstream.supplier().get(),
downstream.supplier().get());
if (downstream.characteristics().contains(Collector.Characteristics.IDENTITY_FINISH)) {
return new CollectorImpl<>(supplier, accumulator, merger, CH_ID);
}
else {
Function<Partition<A>, Map<Boolean, D>> finisher = par ->
new Partition<>(downstream.finisher().apply(par.forTrue),
downstream.finisher().apply(par.forFalse));
return new CollectorImpl<>(supplier, accumulator, merger, finisher, CH_NOID);
}
}
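Usage sketch (my own example); a partition always has exactly the two keys true and false:
Map<Boolean, List<Integer>> parts = Stream.of(1, 2, 3, 4, 5)
        .collect(Collectors.partitioningBy(i -> i % 2 == 0));
// {false=[1, 3, 5], true=[2, 4]}

Map<Boolean, Long> counts = Stream.of(1, 2, 3, 4, 5)
        .collect(Collectors.partitioningBy(i -> i % 2 == 0, Collectors.counting()));
// {false=3, true=2}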
The JDK source code is the best study material we have.
The point of going into this much detail is not to rewrite it yourself; it is to understand how it works internally, so that you call it with full confidence.
A short aside follows.
Java8(5) Stream source code in depth
A small interlude before this part
The AutoCloseable interface: an example showing how a stream gets closed automatically.
public interface BaseStream<T, S extends BaseStream<T, S>>
extends AutoCloseable{} // BaseStream extends AutoCloseable, and Stream extends BaseStream.
public class AutoCloseableTest implements AutoCloseable {
public void dosomething() {
System.out.println(" do something ");
}
@Override
public void close() throws Exception {
System.out.println(" close invoked ");
}
public static void main(String[] args) throws Exception {
try ( AutoCloseableTest autoCloseableTest = new AutoCloseableTest()){
autoCloseableTest.dosomething();
}
}
}
Running it shows that the close method is invoked automatically:
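Expected output (the two println calls in the example above):
 do something 
 close invoked 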
Stream
/**
* A sequence of elements supporting sequential and parallel aggregate
* operations. The following example illustrates an aggregate operation using
* {@link Stream} and {@link IntStream}:
*
* <pre>{@code // example:
* int sum = widgets.stream()
* .filter(w -> w.getColor() == RED)
* .mapToInt(w -> w.getWeight())
* .sum();
* }</pre>
*
* In this example, {@code widgets} is a {@code Collection<Widget>}. We create
* a stream of {@code Widget} objects via {@link Collection#stream Collection.stream()},
* filter it to produce a stream containing only the red widgets, and then
* transform it into a stream of {@code int} values representing the weight of
* each red widget. Then this stream is summed to produce a total weight.
*
* <p>In addition to {@code Stream}, which is a stream of object references,
* there are primitive specializations for {@link IntStream}, {@link LongStream},
* and {@link DoubleStream}, all of which are referred to as "streams" and
* conform to the characteristics and restrictions described here.
Besides Stream, the JDK provides the corresponding primitive specializations IntStream, LongStream and DoubleStream.
*
* <p>To perform a computation, stream
* <a href="package-summary.html#StreamOps">operations</a> are composed into a
* <em>stream pipeline</em>. A stream pipeline consists of a source (which
* might be an array, a collection, a generator function, an I/O channel,
* etc), zero or more <em>intermediate operations</em> (which transform a
* stream into another stream, such as {@link Stream#filter(Predicate)}), and a
* <em>terminal operation</em> (which produces a result or side-effect, such
* as {@link Stream#count()} or {@link Stream#forEach(Consumer)}).
* Streams are lazy; computation on the source data is only performed when the
* terminal operation is initiated, and source elements are consumed only
* as needed.
To perform a computation, stream operations are composed into a stream pipeline.
A stream pipeline consists of:
a source (where the data comes from);
zero or more intermediate operations (each transforms one stream into another stream);
one terminal operation (which produces a result or a side effect, e.g. a sum or a forEach).
Streams are lazy: the intermediate operations only run once the terminal operation is initiated.
* <p>Collections and streams, while bearing some superficial similarities,
* have different goals. Collections are primarily concerned with the efficient
* management of, and access to, their elements. By contrast, streams do not
* provide a means to directly access or manipulate their elements, and are
* instead concerned with declaratively describing their source and the
* computational operations which will be performed in aggregate on that source.
* However, if the provided stream operations do not offer the desired
* functionality, the {@link #iterator()} and {@link #spliterator()} operations
* can be used to perform a controlled traversal.
Collections and streams look superficially similar, but their goals are different.
Collections are about efficiently managing and accessing their elements; streams do not provide direct access to elements and instead describe, declaratively, the source and the computation to be performed on it (collections care about data management, streams care about computing over the data).
If the stream operations do not offer what we need, we can fall back to iterator() or spliterator() for a controlled traversal.
* <p>A stream pipeline, like the "widgets" example above, can be viewed as
* a <em>query</em> on the stream source. Unless the source was explicitly
* designed for concurrent modification (such as a {@link ConcurrentHashMap}),
* unpredictable or erroneous behavior may result from modifying the stream
* source while it is being queried.
A stream pipeline can be viewed as a query on the stream source. Unless the source was explicitly designed for concurrent modification (such as a ConcurrentHashMap), modifying it while it is being queried leads to unpredictable or erroneous behavior
(for example, one thread modifying the source while another thread is running the query).
* <p>Most stream operations accept parameters that describe user-specified
* behavior, such as the lambda expression {@code w -> w.getWeight()} passed to
* {@code mapToInt} in the example above. To preserve correct behavior,
* these <em>behavioral parameters</em>:
// to preserve correct behavior, these behavioral parameters must satisfy the conditions below.
* <ul>
* <li>must be <a href="package-summary.html#NonInterference">non-interfering</a>
* (they do not modify the stream source); and</li>
* <li>in most cases must be <a href="package-summary.html#Statelessness">stateless</a>
* (their result should not depend on any state that might change during execution
* of the stream pipeline).</li>
* </ul>
Behavioral parameters must be non-interfering and, in most cases, stateless.
* <p>Such parameters are always instances of a
* <a href="../function/package-summary.html">functional interface</a> such
* as {@link java.util.function.Function}, and are often lambda expressions or
* method references. Unless otherwise specified these parameters must be
* <em>non-null</em>.
Such parameters are always instances of a functional interface, typically lambda expressions or method references, and unless stated otherwise they must be non-null.
* <p>A stream should be operated on (invoking an intermediate or terminal stream
* operation) only once. This rules out, for example, "forked" streams, where
* the same source feeds two or more pipelines, or multiple traversals of the
* same stream. A stream implementation may throw {@link IllegalStateException}
* if it detects that the stream is being reused. However, since some stream
* operations may return their receiver rather than a new stream object, it may
* not be possible to detect reuse in all cases.
A stream can only be operated on once; to traverse the same source several times you have to build a new pipeline each time.
* <p>Streams have a {@link #close()} method and implement {@link AutoCloseable},
* but nearly all stream instances do not actually need to be closed after use.
* Generally, only streams whose source is an IO channel (such as those returned
* by {@link Files#lines(Path, Charset)}) will require closing. Most streams
* are backed by collections, arrays, or generating functions, which require no
* special resource management. (If a stream does require closing, it can be
* declared as a resource in a {@code try}-with-resources statement.)
Streams have a close() method and implement AutoCloseable (in their parent interface, as illustrated at the top of this part).
However, a stream only needs to be closed when its source is an I/O channel (because it holds file handles and similar resources); most streams do not.
Most streams are backed by collections, arrays or generator functions, which need no special resource management. If a stream does require closing, declare it in a try-with-resources statement.
* <p>Stream pipelines may execute either sequentially or in
* <a href="package-summary.html#Parallelism">parallel</a>. This
* execution mode is a property of the stream. Streams are created
* with an initial choice of sequential or parallel execution. (For example,
* {@link Collection#stream() Collection.stream()} creates a sequential stream,
* and {@link Collection#parallelStream() Collection.parallelStream()} creates
* a parallel one.) This choice of execution mode may be modified by the
* {@link #sequential()} or {@link #parallel()} methods, and may be queried with
* the {@link #isParallel()} method.
A stream pipeline can execute sequentially or in parallel; this execution mode is just a property of the stream, chosen when the stream is created.
For example, stream() creates a sequential stream and parallelStream() creates a parallel one.
The mode can be changed with sequential() or parallel(); the last call before the terminal operation wins.
isParallel() queries whether the stream is currently parallel.
* @param <T> the type of the stream elements
* @since 1.8
* @see IntStream
* @see LongStream
* @see DoubleStream
* @see <a href="package-summary.html">java.util.stream</a>
*/
public interface Stream<T> extends BaseStream<T, Stream<T>> {
// concrete examples for each method can be found in its javadoc
Stream<T> filter(Predicate<? super T> predicate); // filter
<R> Stream<R> map(Function<? super T, ? extends R> mapper); // map to another type
IntStream mapToInt(ToIntFunction<? super T> mapper);
LongStream mapToLong(ToLongFunction<? super T> mapper);
DoubleStream mapToDouble(ToDoubleFunction<? super T> mapper);
<R> Stream<R> flatMap(Function<? super T, ? extends Stream<? extends R>> mapper); // flatten nested streams
IntStream flatMapToInt(Function<? super T, ? extends IntStream> mapper);
LongStream flatMapToLong(Function<? super T, ? extends LongStream> mapper);
DoubleStream flatMapToDouble(Function<? super T, ? extends DoubleStream> mapper);
Stream<T> distinct(); // remove duplicates
Stream<T> sorted(); // sort
Stream<T> sorted(Comparator<? super T> comparator);
Stream<T> peek(Consumer<? super T> action);
Stream<T> limit(long maxSize); // truncate
void forEach(Consumer<? super T> action); // traverse each element
void forEachOrdered(Consumer<? super T> action); // traverse in encounter order
Object[] toArray(); // convert to an array
T reduce(T identity, BinaryOperator<T> accumulator); // reduce: fold into a single result
<R> R collect(Supplier<R> supplier,
BiConsumer<R, ? super T> accumulator,
BiConsumer<R, R> combiner); // mutable reduction (collect)
// ...
}
See the parent interface (BaseStream) for the remaining methods.
Details of individual Stream methods
spliterator(), the splittable iterator:
/**
* Returns a spliterator for the elements of this stream.
*
* <p>This is a <a href="package-summary.html#StreamOps">terminal
* operation</a>.
*
* @return the element spliterator for this stream
*/
Spliterator<T> spliterator();
Java8(6) Spliterator and BaseStream source code
(Notes taken while studying at work.)
BaseStream source code
BaseStream is the parent interface of all streams.
/**
* Base interface for streams, which are sequences of elements supporting
* sequential and parallel aggregate operations. The following example
* illustrates an aggregate operation using the stream types {@link Stream}
* and {@link IntStream}, computing the sum of the weights of the red widgets:
*
* <pre>{@code
* int sum = widgets.stream()
* .filter(w -> w.getColor() == RED)
* .mapToInt(w -> w.getWeight())
* .sum();
* }</pre>
*
* See the class documentation for {@link Stream} and the package documentation
* for <a href="package-summary.html">java.util.stream</a> for additional
* specification of streams, stream operations, stream pipelines, and
* parallelism, which governs the behavior of all stream types.
*
* @param <T> the type of the stream elements
* @param <S> the type of of the stream implementing {@code BaseStream}
* @since 1.8
* @see Stream
* @see IntStream
* @see LongStream
* @see DoubleStream
* @see <a href="package-summary.html">java.util.stream</a>
*/
public interface BaseStream<T, S extends BaseStream<T, S>> extends AutoCloseable
public interface Stream<T> extends BaseStream<T, Stream<T>>
BaseStream {
    Iterator<T> iterator();           // the iterator
    Spliterator<T> spliterator();     // the splittable iterator; this is a terminal operation
    boolean isParallel();             // is this stream parallel?
    S sequential();                   // returns an equivalent sequential stream (S is a new stream object)
    S parallel();                     // returns an equivalent parallel stream
    S unordered();                    // returns an equivalent unordered stream
    S onClose(Runnable closeHandler); // returns a stream whose handler runs when close() is called
    void close();                     // closes the stream
}
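A tiny sketch of my own (not from the course) showing the mode switches and the query:
List<String> list = Arrays.asList("hello", "world");
Stream<String> s = list.stream();    // created sequential
System.out.println(s.isParallel());  // false
s = s.parallel();                    // switch the pipeline to parallel
System.out.println(s.isParallel());  // true
// sequential()/parallel() can be called repeatedly; the last call before the terminal operation wins
System.out.println(s.sequential().isParallel()); // false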
An example of close handlers:
/**
* Returns an equivalent stream with an additional close handler. Close
* handlers are run when the {@link #close()} method
* is called on the stream, and are executed in the order they were
* added. All close handlers are run, even if earlier close handlers throw
* exceptions. If any close handler throws an exception, the first
* exception thrown will be relayed to the caller of {@code close()}, with
* any remaining exceptions added to that exception as suppressed exceptions
* (unless one of the remaining exceptions is the same exception as the
* first exception, since an exception cannot suppress itself.) May
* return itself.
*
* <p>This is an <a href="package-summary.html#StreamOps">intermediate
* operation</a>.
*
* @param closeHandler A task to execute when the stream is closed
* @return a stream with a handler that is run if the stream is closed
*/
S onClose(Runnable closeHandler);
public static void main(String[] args) {
List<String> list = Arrays.asList("hello","world");
NullPointerException nullPointerException = new NullPointerException("myexception");
try (Stream<String> stream = list.stream()){
stream.onClose(()->{
System.out.println("aaa");
// throw new NullPointerException("first");
throw nullPointerException;
}).onClose(()->{
System.out.println("aaa");
throw nullPointerException;
}).forEach(System.out::println);
}
// exceptions thrown by later close handlers are suppressed and attached to the first one;
// if the very same exception object is thrown twice it is only reported once, while distinct exception objects are all reported.
}
The javadoc explains this better than any other resource.
Stream source code analysis
stream():
/**
* Returns a sequential {@code Stream} with this collection as its source.
Returns a sequential stream with this collection as its source.
* <p>This method should be overridden when the {@link #spliterator()}
* method cannot return a spliterator that is {@code IMMUTABLE},
* {@code CONCURRENT}, or <em>late-binding</em>. (See {@link #spliterator()}
* for details.)
When spliterator() cannot return a spliterator that is IMMUTABLE, CONCURRENT, or late-binding, this method should be overridden.
* @implSpec
* The default implementation creates a sequential {@code Stream} from the
* collection's {@code Spliterator}.
The default implementation creates a sequential stream from the collection's Spliterator.
* @return a sequential {@code Stream} over the elements in this collection
* @since 1.8
*/
default Stream<E> stream() {
return StreamSupport.stream(spliterator(), false);
}
spliterator(), the splittable iterator:
/**
* Creates a {@link Spliterator} over the elements in this collection.
*
* Implementations should document characteristic values reported by the
* spliterator. Such characteristic values are not required to be reported
* if the spliterator reports {@link Spliterator#SIZED} and this collection
* contains no elements.
* <p>The default implementation should be overridden by subclasses that
* can return a more efficient spliterator. In order to
* preserve expected laziness behavior for the {@link #stream()} and
* {@link #parallelStream()}} methods, spliterators should either have the
* characteristic of {@code IMMUTABLE} or {@code CONCURRENT}, or be
* <em><a href="Spliterator.html#binding">late-binding</a></em>.
Subclasses should override the default implementation. To preserve the lazy behavior of stream() and parallelStream(), the spliterator should be IMMUTABLE or CONCURRENT, or be late-binding.
* If none of these is practical, the overriding class should describe the
* spliterator's documented policy of binding and structural interference,
* and should override the {@link #stream()} and {@link #parallelStream()}
* methods to create streams using a {@code Supplier} of the spliterator,
* as in:
* <pre>{@code
* Stream<E> s = StreamSupport.stream(() -> spliterator(), spliteratorCharacteristics)
* }</pre>
Why is it called a spliterator? Because it splits first and then iterates.
If none of those characteristics can be provided, the overriding class should document its binding policy and override stream()/parallelStream() to build streams from a Supplier of the spliterator, as shown above.
* <p>These requirements ensure that streams produced by the
* {@link #stream()} and {@link #parallelStream()} methods will reflect the
* contents of the collection as of initiation of the terminal stream
* operation.
These requirements ensure that the streams reflect the contents of the collection as of the start of the terminal operation.
* @implSpec
* The default implementation creates a
* <em><a href="Spliterator.html#binding">late-binding</a></em> spliterator
* from the collections's {@code Iterator}. The spliterator inherits the
* <em>fail-fast</em> properties of the collection's iterator.
* <p>
* The created {@code Spliterator} reports {@link Spliterator#SIZED}.
By default a late-binding spliterator is created from the collection's Iterator; it inherits the iterator's fail-fast behavior and reports SIZED.
* @implNote
* The created {@code Spliterator} additionally reports
* {@link Spliterator#SUBSIZED}.
*
* <p>If a spliterator covers no elements then the reporting of additional
* characteristic values, beyond that of {@code SIZED} and {@code SUBSIZED},
* does not aid clients to control, specialize or simplify computation.
* However, this does enable shared use of an immutable and empty
* spliterator instance (see {@link Spliterators#emptySpliterator()}) for
* empty collections, and enables clients to determine if such a spliterator
* covers no elements.
If a spliterator covers no elements, characteristics beyond SIZED and SUBSIZED do not help clients; reporting nothing extra does, however, allow a single immutable empty spliterator instance to be shared for empty collections.
* @return a {@code Spliterator} over the elements in this collection
* @since 1.8
*/
@Override
default Spliterator<E> spliterator() {
return Spliterators.spliterator(this, 0);
}
Spliterator javadoc
/**
* An object for traversing and partitioning elements of a source. The source
* of elements covered by a Spliterator could be, for example, an array, a
* {@link Collection}, an IO channel, or a generator function.
* <p>A Spliterator may traverse elements individually ({@link
* #tryAdvance tryAdvance()}) or sequentially in bulk
* ({@link #forEachRemaining forEachRemaining()}).
Elements can be traversed one at a time with tryAdvance(), or in bulk with forEachRemaining().
*
* <p>A Spliterator may also partition off some of its elements (using
* {@link #trySplit}) as another Spliterator, to be used in
* possibly-parallel operations. Operations using a Spliterator that
* cannot split, or does so in a highly imbalanced or inefficient
* manner, are unlikely to benefit from parallelism. Traversal
* and splitting exhaust elements; each Spliterator is useful for only a single
* bulk computation.
A Spliterator can partition off some of its elements into another Spliterator, which can then be processed in parallel.
A spliterator that cannot split, or splits in a very unbalanced or inefficient way, gains little from parallelism.
Traversal and splitting exhaust the elements; each spliterator is only good for a single bulk computation.
* <p>A Spliterator also reports a set of {@link #characteristics()} of its
* structure, source, and elements from among {@link #ORDERED},
* {@link #DISTINCT}, {@link #SORTED}, {@link #SIZED}, {@link #NONNULL},
* {@link #IMMUTABLE}, {@link #CONCURRENT}, and {@link #SUBSIZED}. These may
* be employed by Spliterator clients to control, specialize or simplify
* computation. For example, a Spliterator for a {@link Collection} would
* report {@code SIZED}, a Spliterator for a {@link Set} would report
* {@code DISTINCT}, and a Spliterator for a {@link SortedSet} would also
* report {@code SORTED}. Characteristics are reported as a simple unioned bit
* set.
Characteristic values: ORDERED (has an encounter order), DISTINCT (no duplicates), SORTED, SIZED (known size),
NONNULL (no null elements), IMMUTABLE, CONCURRENT, SUBSIZED.
Clients can use these characteristics to control, specialize or simplify the computation. Here they are encoded as bits in an int, whereas Collector expresses its characteristics as an enum set.
* Some characteristics additionally constrain method behavior; for example if
* {@code ORDERED}, traversal methods must conform to their documented ordering.
* New characteristics may be defined in the future, so implementors should not
* assign meanings to unlisted values.
New characteristics may be defined in the future, so implementors should not assign meanings to unlisted values.
* <p><a name="binding">A Spliterator that does not report {@code IMMUTABLE} or
* {@code CONCURRENT} is expected to have a documented policy concerning:
* when the spliterator <em>binds</em> to the element source; and detection of
* structural interference of the element source detected after binding.</a> A
* <em>late-binding</em> Spliterator binds to the source of elements at the
* point of first traversal, first split, or first query for estimated size,
* rather than at the time the Spliterator is created. A Spliterator that is
* not <em>late-binding</em> binds to the source of elements at the point of
* construction or first invocation of any method. Modifications made to the
* source prior to binding are reflected when the Spliterator is traversed.
* After binding a Spliterator should, on a best-effort basis, throw
* {@link ConcurrentModificationException} if structural interference is
* detected. Spliterators that do this are called <em>fail-fast</em>. The
* bulk traversal method ({@link #forEachRemaining forEachRemaining()}) of a
* Spliterator may optimize traversal and check for structural interference
* after all elements have been traversed, rather than checking per-element and
* failing immediately.
Whether a spliterator is IMMUTABLE or CONCURRENT matters for when it binds to its source.
A late-binding spliterator binds to the source at the first traversal, first split, or first size query, rather than when the spliterator is created.
A non-late-binding spliterator binds to the source at construction or on the first method invocation.
If the source is structurally modified after binding, the spliterator should, on a best-effort basis, throw ConcurrentModificationException; spliterators that do this are called fail-fast.
forEachRemaining() may optimize traversal and check for interference only after all elements have been traversed, instead of checking element by element and failing immediately.
* <p>Spliterators can provide an estimate of the number of remaining elements
* via the {@link #estimateSize} method. Ideally, as reflected in characteristic
* {@link #SIZED}, this value corresponds exactly to the number of elements
* that would be encountered in a successful traversal. However, even when not
* exactly known, an estimated value value may still be useful to operations
* being performed on the source, such as helping to determine whether it is
* preferable to split further or traverse the remaining elements sequentially.
With the SIZED characteristic, the number of elements to be traversed is exact.
Even without SIZED, an estimate is still useful to operations on the source, for example to decide whether to split further or traverse the rest sequentially.
* <p>Despite their obvious utility in parallel algorithms, spliterators are not
* expected to be thread-safe; instead, implementations of parallel algorithms
* using spliterators should ensure that the spliterator is only used by one
* thread at a time. This is generally easy to attain via <em>serial
* thread-confinement</em>, which often is a natural consequence of typical
* parallel algorithms that work by recursive decomposition. A thread calling
* {@link #trySplit()} may hand over the returned Spliterator to another thread,
* which in turn may traverse or further split that Spliterator. The behaviour
* of splitting and traversal is undefined if two or more threads operate
* concurrently on the same spliterator. If the original thread hands a
* spliterator off to another thread for processing, it is best if that handoff
* occurs before any elements are consumed with {@link #tryAdvance(Consumer)
* tryAdvance()}, as certain guarantees (such as the accuracy of
* {@link #estimateSize()} for {@code SIZED} spliterators) are only valid before
* traversal has begun.
Spliterators are not expected to be thread-safe; instead, a parallel algorithm should ensure that each spliterator is used by only one thread at a time.
This is usually achieved through serial thread-confinement, which falls out naturally from recursive decomposition: a thread calling trySplit() may hand the returned spliterator to another thread.
* <p>Primitive subtype specializations of {@code Spliterator} are provided for
* {@link OfInt int}, {@link OfLong long}, and {@link OfDouble double} values.
* The subtype default implementations of
* {@link Spliterator#tryAdvance(java.util.function.Consumer)}
* and {@link Spliterator#forEachRemaining(java.util.function.Consumer)} box
* primitive values to instances of their corresponding wrapper class. Such
* boxing may undermine any performance advantages gained by using the primitive
* specializations. To avoid boxing, the corresponding primitive-based methods
* should be used. For example,
* {@link Spliterator.OfInt#tryAdvance(java.util.function.IntConsumer)}
* and {@link Spliterator.OfInt#forEachRemaining(java.util.function.IntConsumer)}
* should be used in preference to
* {@link Spliterator.OfInt#tryAdvance(java.util.function.Consumer)} and
* {@link Spliterator.OfInt#forEachRemaining(java.util.function.Consumer)}.
* Traversal of primitive values using boxing-based methods
* {@link #tryAdvance tryAdvance()} and
* {@link #forEachRemaining(java.util.function.Consumer) forEachRemaining()}
* does not affect the order in which the values, transformed to boxed values,
* are encountered.
To avoid repeated boxing and unboxing, prefer the primitive-specialized methods (e.g. OfInt.tryAdvance(IntConsumer)) over the generic ones.
*
* @apiNote
* <p>Spliterators, like {@code Iterator}s, are for traversing the elements of
* a source. The {@code Spliterator} API was designed to support efficient
* parallel traversal in addition to sequential traversal, by supporting
* decomposition as well as single-element iteration. In addition, the
* protocol for accessing elements via a Spliterator is designed to impose
* smaller per-element overhead than {@code Iterator}, and to avoid the inherent
* race involved in having separate methods for {@code hasNext()} and
* {@code next()}.
A spliterator, like an iterator, traverses the elements of a source.
The Spliterator API also supports efficient parallel traversal by supporting decomposition as well as single-element iteration.
Compared with Iterator, Spliterator has lower per-element overhead; tryAdvance() also avoids the inherent race between separate hasNext() and next() calls.
* <p>For mutable sources, arbitrary and non-deterministic behavior may occur if
* the source is structurally interfered with (elements added, replaced, or
* removed) between the time that the Spliterator binds to its data source and
* the end of traversal. For example, such interference will produce arbitrary,
* non-deterministic results when using the {@code java.util.stream} framework.
For mutable sources, arbitrary and non-deterministic behavior may occur if the source is structurally modified between binding and the end of traversal.
* <p>Structural interference of a source can be managed in the following ways
* (in approximate order of decreasing desirability):
* <ul>
* <li>The source cannot be structurally interfered with.
* <br>For example, an instance of
* {@link java.util.concurrent.CopyOnWriteArrayList} is an immutable source.
* A Spliterator created from the source reports a characteristic of
* {@code IMMUTABLE}.</li>
CopyOnWriteArrayList suits read-heavy, write-light workloads; as a source it is effectively immutable, so its spliterator reports IMMUTABLE.
* <li>The source manages concurrent modifications.
* <br>For example, a key set of a {@link java.util.concurrent.ConcurrentHashMap}
* is a concurrent source. A Spliterator created from the source reports a
* characteristic of {@code CONCURRENT}.</li>
A concurrent source, e.g. the key set of a ConcurrentHashMap, reports the CONCURRENT characteristic.
* <li>The mutable source provides a late-binding and fail-fast Spliterator.
* <br>Late binding narrows the window during which interference can affect
* the calculation; fail-fast detects, on a best-effort basis, that structural
* interference has occurred after traversal has commenced and throws
* {@link ConcurrentModificationException}. For example, {@link ArrayList},
* and many other non-concurrent {@code Collection} classes in the JDK, provide
* a late-binding, fail-fast spliterator.</li>
* <li>The mutable source provides a non-late-binding but fail-fast Spliterator.
* <br>The source increases the likelihood of throwing
* {@code ConcurrentModificationException} since the window of potential
* interference is larger.</li>
* <li>The mutable source provides a late-binding and non-fail-fast Spliterator.
* <br>The source risks arbitrary, non-deterministic behavior after traversal
* has commenced since interference is not detected.
* </li>
* <li>The mutable source provides a non-late-binding and non-fail-fast
* Spliterator.
* <br>The source increases the risk of arbitrary, non-deterministic behavior
* since non-detected interference may occur after construction.
* </li>
* </ul>
*
// a sequential example:
* <p><b>Example.</b> Here is a class (not a very useful one, except
* for illustration) that maintains an array in which the actual data
* are held in even locations, and unrelated tag data are held in odd
* locations. Its Spliterator ignores the tags.
*
* <pre> {@code
* class TaggedArray<T> {
* private final Object[] elements; // immutable after construction
* TaggedArray(T[] data, Object[] tags) {
* int size = data.length;
* if (tags.length != size) throw new IllegalArgumentException();
* this.elements = new Object[2 * size];
* for (int i = 0, j = 0; i < size; ++i) {
* elements[j++] = data[i];
* elements[j++] = tags[i];
* }
* }
*
* public Spliterator<T> spliterator() {
* return new TaggedArraySpliterator<>(elements, 0, elements.length);
* }
*
* static class TaggedArraySpliterator<T> implements Spliterator<T> {
* private final Object[] array;
* private int origin; // current index, advanced on split or traversal
* private final int fence; // one past the greatest index
*
* TaggedArraySpliterator(Object[] array, int origin, int fence) {
* this.array = array; this.origin = origin; this.fence = fence;
* }
*
* public void forEachRemaining(Consumer<? super T> action) {
* for (; origin < fence; origin += 2)
* action.accept((T) array[origin]);
* }
*
// advance this spliterator by one element
* public boolean tryAdvance(Consumer<? super T> action) {
* if (origin < fence) {
* action.accept((T) array[origin]);
* origin += 2;
* return true;
* }
* else // cannot advance
* return false;
* }
*
// try to split as evenly as possible into two halves; return null if it cannot be split
* public Spliterator<T> trySplit() {
* int lo = origin; // divide range in half
* int mid = ((lo + fence) >>> 1) & ~1; // force midpoint to be even
* if (lo < mid) { // split out left half
* origin = mid; // reset this Spliterator's origin
* return new TaggedArraySpliterator<>(array, lo, mid);
* }
* else // too small to split
* return null;
* }
*
* public long estimateSize() {
* return (long)((fence - origin) / 2);
* }
*
* public int characteristics() {
* return ORDERED | SIZED | IMMUTABLE | SUBSIZED;
* }
* }
* }}</pre>
*
// a parallel example
* <p>As an example how a parallel computation framework, such as the
* {@code java.util.stream} package, would use Spliterator in a parallel
* computation, here is one way to implement an associated parallel forEach,
* that illustrates the primary usage idiom of splitting off subtasks until
* the estimated amount of work is small enough to perform
* sequentially. Here we assume that the order of processing across
* subtasks doesn't matter; different (forked) tasks may further split
* and process elements concurrently in undetermined order. This
* example uses a {@link java.util.concurrent.CountedCompleter};
* similar usages apply to other parallel task constructions.
*
* <pre>{@code
* static <T> void parEach(TaggedArray<T> a, Consumer<T> action) {
* Spliterator<T> s = a.spliterator();
* long targetBatchSize = s.estimateSize() / (ForkJoinPool.getCommonPoolParallelism() * 8);
* new ParEach(null, s, action, targetBatchSize).invoke();
* }
*
* static class ParEach<T> extends CountedCompleter<Void> {
* final Spliterator<T> spliterator;
* final Consumer<T> action;
* final long targetBatchSize;
*
* ParEach(ParEach<T> parent, Spliterator<T> spliterator,
* Consumer<T> action, long targetBatchSize) {
* super(parent);
* this.spliterator = spliterator; this.action = action;
* this.targetBatchSize = targetBatchSize;
* }
*
* public void compute() {
* Spliterator<T> sub;
* while (spliterator.estimateSize() > targetBatchSize &&
* (sub = spliterator.trySplit()) != null) {
* addToPendingCount(1);
* new ParEach<>(this, sub, action, targetBatchSize).fork();
* }
* spliterator.forEachRemaining(action);
* propagateCompletion();
* }
* }}</pre>
*
* @implNote
* If the boolean system property {@code org.openjdk.java.util.stream.tripwire}
* is set to {@code true} then diagnostic warnings are reported if boxing of
* primitive values occur when operating on primitive subtype specializations.
* If {@code org.openjdk.java.util.stream.tripwire} is set to true, a diagnostic warning is reported whenever boxing occurs in a primitive specialization.
* @param <T> the type of elements returned by this Spliterator
*
* @see Collection
* @since 1.8
*/
public interface Spliterator<T> {
// ... a few of the methods are shown below
}
tryAdvance();
/**
* If a remaining element exists, performs the given action on it,
* returning {@code true}; else returns {@code false}. If this
* Spliterator is {@link #ORDERED} the action is performed on the
* next element in encounter order. Exceptions thrown by the
* action are relayed to the caller.
*
* @param action The action
* @return {@code false} if no remaining elements existed
* upon entry to this method, else {@code true}.
* @throws NullPointerException if the specified action is null
*/
// try to advance: if a remaining element exists, perform the action on it.
boolean tryAdvance(Consumer<? super T> action);
forEachRemaining();
/**
* Performs the given action for each remaining element, sequentially in
* the current thread, until all elements have been processed or the action
* throws an exception. If this Spliterator is {@link #ORDERED}, actions
* are performed in encounter order. Exceptions thrown by the action
* are relayed to the caller.
*
* @implSpec
* The default implementation repeatedly invokes {@link #tryAdvance} until
* it returns {@code false}. It should be overridden whenever possible.
*
* @param action The action
* @throws NullPointerException if the specified action is null
*/
// performs the action for every remaining element.
default void forEachRemaining(Consumer<? super T> action) {
do { } while (tryAdvance(action));
}
trySplit();
/**
* If this spliterator can be partitioned, returns a Spliterator
* covering elements, that will, upon return from this method, not
* be covered by this Spliterator.
* If this spliterator can be partitioned, a new Spliterator covering part of the elements is returned;
the elements it covers are no longer covered by the current spliterator.
* <p>If this Spliterator is {@link #ORDERED}, the returned Spliterator
* must cover a strict prefix of the elements.
* If this spliterator is ORDERED, the returned spliterator must cover a strict prefix of the elements.
* <p>Unless this Spliterator covers an infinite number of elements,
* repeated calls to {@code trySplit()} must eventually return {@code null}.
Unless this spliterator covers an infinite number of elements, repeated calls to trySplit() must eventually return null, meaning no further splitting is possible.
* Upon non-null return:
When the return value is non-null:
* <ul>
* <li>the value reported for {@code estimateSize()} before splitting,
* must, after splitting, be greater than or equal to {@code estimateSize()}
* for this and the returned Spliterator; and</li>
the estimateSize() before the split must be >= the estimateSize() of this spliterator and of the returned one after the split;
* <li>if this Spliterator is {@code SUBSIZED}, then {@code estimateSize()}
* for this spliterator before splitting must be equal to the sum of
* {@code estimateSize()} for this and the returned Spliterator after
* splitting.</li>
if the spliterator is SUBSIZED, the estimateSize() before splitting must equal the sum of the estimateSize() of the two spliterators after splitting.
* </ul>
*
* <p>This method may return {@code null} for any reason,
* including emptiness, inability to split after traversal has
* commenced, data structure constraints, and efficiency
* considerations.
*
* @apiNote
* An ideal {@code trySplit} method efficiently (without
* traversal) divides its elements exactly in half, allowing
* balanced parallel computation. Many departures from this ideal
* remain highly effective; for example, only approximately
* splitting an approximately balanced tree, or for a tree in
* which leaf nodes may contain either one or two elements,
* failing to further split these nodes. However, large
* deviations in balance and/or overly inefficient {@code
* trySplit} mechanics typically result in poor parallel
* performance.
* Ideally trySplit divides the elements exactly in half, enabling balanced parallel computation; many implementations only approximate this.
A very unbalanced or inefficient trySplit, however, leads to poor parallel performance.
* @return a {@code Spliterator} covering some portion of the
* elements, or {@code null} if this spliterator cannot be split
*/
// try to split.
Spliterator<T> trySplit();
estimateSize();
/**
* Returns an estimate of the number of elements that would be
* encountered by a {@link #forEachRemaining} traversal, or returns {@link
* Long#MAX_VALUE} if infinite, unknown, or too expensive to compute.
*
* <p>If this Spliterator is {@link #SIZED} and has not yet been partially
* traversed or split, or this Spliterator is {@link #SUBSIZED} and has
* not yet been partially traversed, this estimate must be an accurate
* count of elements that would be encountered by a complete traversal.
* Otherwise, this estimate may be arbitrarily inaccurate, but must decrease
* as specified across invocations of {@link #trySplit}.
* After splitting, the estimate may be inexact, but it must decrease across successive calls to trySplit().
* @apiNote
* Even an inexact estimate is often useful and inexpensive to compute.
* For example, a sub-spliterator of an approximately balanced binary tree
* may return a value that estimates the number of elements to be half of
* that of its parent; if the root Spliterator does not maintain an
* accurate count, it could estimate size to be the power of two
* corresponding to its maximum depth.
* Even an inexact estimate is often useful and cheap to compute.
* @return the estimated size, or {@code Long.MAX_VALUE} if infinite,
* unknown, or too expensive to compute.
*/
// estimated number of remaining elements
long estimateSize();
characteristics();
/**
* Returns a set of characteristics of this Spliterator and its
* elements. The result is represented as ORed values from {@link
* #ORDERED}, {@link #DISTINCT}, {@link #SORTED}, {@link #SIZED},
* {@link #NONNULL}, {@link #IMMUTABLE}, {@link #CONCURRENT},
* {@link #SUBSIZED}. Repeated calls to {@code characteristics()} on
* a given spliterator, prior to or in-between calls to {@code trySplit},
* should always return the same result.
Returns the set of characteristic values, ORed together.
Repeated calls to characteristics() on the same spliterator, before or between calls to trySplit(), should always return the same result.
* <p>If a Spliterator reports an inconsistent set of
* characteristics (either those returned from a single invocation
* or across multiple invocations), no guarantees can be made
* about any computation using this Spliterator.
If a spliterator reports an inconsistent set of characteristics, no guarantees can be made about any computation using it.
* @apiNote The characteristics of a given spliterator before splitting
* may differ from the characteristics after splitting. For specific
* examples see the characteristic values {@link #SIZED}, {@link #SUBSIZED}
* and {@link #CONCURRENT}.
*
* @return a representation of characteristics
*/
// the characteristic values of this spliterator.
int characteristics();
hasCharacteristics();
/**
* Returns {@code true} if this Spliterator's {@link
* #characteristics} contain all of the given characteristics.
*
* @implSpec
* The default implementation returns true if the corresponding bits
* of the given characteristics are set.
* The default implementation returns true if the corresponding bits of the given characteristics are all set.
* @param characteristics the characteristics to check for
* @return {@code true} if all the specified characteristics are present,
* else {@code false}
*/
// whether this spliterator has all of the given characteristics.
default boolean hasCharacteristics(int characteristics) {
return (characteristics() & characteristics) == characteristics;
}
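A small usage sketch of my own (the list contents are made up):
Spliterator<String> sp = Arrays.asList("hello", "world").spliterator();
System.out.println(sp.hasCharacteristics(Spliterator.SIZED));  // true: a List-backed spliterator knows its size
System.out.println(sp.hasCharacteristics(Spliterator.SORTED)); // false
System.out.println(sp.estimateSize());                         // 2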
getComparator();
/**
* If this Spliterator's source is {@link #SORTED} by a {@link Comparator},
* returns that {@code Comparator}. If the source is {@code SORTED} in
* {@linkplain Comparable natural order}, returns {@code null}. Otherwise,
* if the source is not {@code SORTED}, throws {@link IllegalStateException}.
*
* @implSpec
* The default implementation always throws {@link IllegalStateException}.
*
* @return a Comparator, or {@code null} if the elements are sorted in the
* natural order.
* @throws IllegalStateException if the spliterator does not report
* a characteristic of {@code SORTED}.
*/
// SORTED by a Comparator: return it; sorted in natural order: return null; not SORTED at all: throw IllegalStateException (the default).
default Comparator<? super T> getComparator() {
throw new IllegalStateException();
}
IMMUTABLE
/**
* Characteristic value signifying that the element source cannot be
* structurally modified; that is, elements cannot be added, replaced, or
* removed, so such changes cannot occur during traversal. A Spliterator
* that does not report {@code IMMUTABLE} or {@code CONCURRENT} is expected
* to have a documented policy (for example throwing
* {@link ConcurrentModificationException}) concerning structural
* interference detected during traversal.
*/
public static final int IMMUTABLE = 0x00000400; // the source cannot be structurally modified
CONCURRENT
/**
* Characteristic value signifying that the element source may be safely
* concurrently modified (allowing additions, replacements, and/or removals)
* by multiple threads without external synchronization. If so, the
* Spliterator is expected to have a documented policy concerning the impact
* of modifications during traversal.
*
* <p>A top-level Spliterator should not report both {@code CONCURRENT} and
* {@code SIZED}, since the finite size, if known, may change if the source
* is concurrently modified during traversal. Such a Spliterator is
* inconsistent and no guarantees can be made about any computation using
* that Spliterator. Sub-spliterators may report {@code SIZED} if the
* sub-split size is known and additions or removals to the source are not
* reflected when traversing.
*
* @apiNote Most concurrent collections maintain a consistency policy
* guaranteeing accuracy with respect to elements present at the point of
* Spliterator construction, but possibly not reflecting subsequent
* additions or removals.
*/
public static final int CONCURRENT = 0x00001000;
OfPrimitive
/**
* A Spliterator specialized for primitive values.
*
* @param <T> the type of elements returned by this Spliterator. The
* type must be a wrapper type for a primitive type, such as {@code Integer}
* for the primitive {@code int} type.
* @param <T_CONS> the type of primitive consumer. The type must be a
* primitive specialization of {@link java.util.function.Consumer} for
* {@code T}, such as {@link java.util.function.IntConsumer} for
* {@code Integer}.
* @param <T_SPLITR> the type of primitive Spliterator. The type must be
* a primitive specialization of Spliterator for {@code T}, such as
* {@link Spliterator.OfInt} for {@code Integer}.
*
* @see Spliterator.OfInt
* @see Spliterator.OfLong
* @see Spliterator.OfDouble
* @since 1.8
*/
public interface OfPrimitive<T, T_CONS, T_SPLITR extends Spliterator.OfPrimitive<T, T_CONS, T_SPLITR>>
extends Spliterator<T> {
@Override
T_SPLITR trySplit();
/**
* If a remaining element exists, performs the given action on it,
* returning {@code true}; else returns {@code false}. If this
* Spliterator is {@link #ORDERED} the action is performed on the
* next element in encounter order. Exceptions thrown by the
* action are relayed to the caller.
*
* @param action The action
* @return {@code false} if no remaining elements existed
* upon entry to this method, else {@code true}.
* @throws NullPointerException if the specified action is null
*/
@SuppressWarnings("overloads")
boolean tryAdvance(T_CONS action);
/**
* Performs the given action for each remaining element, sequentially in
* the current thread, until all elements have been processed or the
* action throws an exception. If this Spliterator is {@link #ORDERED},
* actions are performed in encounter order. Exceptions thrown by the
* action are relayed to the caller.
*
* @implSpec
* The default implementation repeatedly invokes {@link #tryAdvance}
* until it returns {@code false}. It should be overridden whenever
* possible.
*
* @param action The action
* @throws NullPointerException if the specified action is null
*/
@SuppressWarnings("overloads")
default void forEachRemaining(T_CONS action) {
do { } while (tryAdvance(action));
}
}
OfInt
/**
* A Spliterator specialized for {@code int} values.
* @since 1.8
*/
public interface OfInt extends OfPrimitive<Integer, IntConsumer, OfInt> {
@Override
OfInt trySplit();
@Override
boolean tryAdvance(IntConsumer action);
@Override
default void forEachRemaining(IntConsumer action) {
do { } while (tryAdvance(action));
}
/**
* {@inheritDoc}
* @implSpec
* If the action is an instance of {@code IntConsumer} then it is cast
* to {@code IntConsumer} and passed to
* {@link #tryAdvance(java.util.function.IntConsumer)}; otherwise
* the action is adapted to an instance of {@code IntConsumer}, by
* boxing the argument of {@code IntConsumer}, and then passed to
* {@link #tryAdvance(java.util.function.IntConsumer)}.
*/
@Override
default boolean tryAdvance(Consumer<? super Integer> action) {
if (action instanceof IntConsumer) {
return tryAdvance((IntConsumer) action);
}
else {
if (Tripwire.ENABLED)
Tripwire.trip(getClass(),
"{0} calling Spliterator.OfInt.tryAdvance((IntConsumer) action::accept)");
return tryAdvance((IntConsumer) action::accept);
}
}
/**
* {@inheritDoc}
* @implSpec
* If the action is an instance of {@code IntConsumer} then it is cast
* to {@code IntConsumer} and passed to
* {@link #forEachRemaining(java.util.function.IntConsumer)}; otherwise
* the action is adapted to an instance of {@code IntConsumer}, by
* boxing the argument of {@code IntConsumer}, and then passed to
* {@link #forEachRemaining(java.util.function.IntConsumer)}.
*/
@Override
default void forEachRemaining(Consumer<? super Integer> action) {
if (action instanceof IntConsumer) {
forEachRemaining((IntConsumer) action);
}
else {
if (Tripwire.ENABLED)
Tripwire.trip(getClass(),
"{0} calling Spliterator.OfInt.forEachRemaining((IntConsumer) action::accept)");
forEachRemaining((IntConsumer) action::accept);
}
}
}
IntConsumer and Consumer have no inheritance relationship at all, so why does (IntConsumer) action::accept compile?
Because the cast applies to the method reference, not to the Consumer object: action::accept is re-targeted at the IntConsumer functional interface, and the JDK's auto-boxing lets the int argument be passed on to the original Consumer<Integer>.
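A tiny sketch of that adaptation (my own example; the names are made up):
import java.util.function.Consumer;
import java.util.function.IntConsumer;

public class AdaptConsumerDemo {
    public static void main(String[] args) {
        Consumer<Integer> boxed = i -> System.out.println("got " + i);
        // a new lambda targeted at IntConsumer; the int argument is auto-boxed
        // to Integer before being forwarded to the original Consumer
        IntConsumer adapted = boxed::accept;
        adapted.accept(42); // prints "got 42"
    }
}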
Java8(7) Stream pipeline source structure
How stream invocation actually works.
When a stream runs, the intermediate operations are first fused together; only when the terminal operation is invoked is the whole chain applied to each element, one element at a time, with short-circuit operations where applicable.
Write it down, then explain it to someone else, and you have really mastered it.
What if you forget it afterwards? Write it down: notes, a blog. Rote memorization is useless.
ReferencePipeline
/**
* Abstract base class for an intermediate pipeline stage or pipeline source
* stage implementing whose elements are of type {@code U}.
*/
// the reference pipeline
// ReferencePipeline represents the source stage and the intermediate stages of a stream of object references.
// ReferencePipeline.Head represents the source stage of the stream.
abstract class ReferencePipeline<P_IN, P_OUT>
extends AbstractPipeline<P_IN, P_OUT, Stream<P_OUT>>
implements Stream<P_OUT> {
}
AbstractPipeline
/**
* Abstract base class for "pipeline" classes, which are the core
* implementations of the Stream interface and its primitive specializations.
* Manages construction and evaluation of stream pipelines.
*
* <p>An {@code AbstractPipeline} represents an initial portion of a stream
* pipeline, encapsulating a stream source and zero or more intermediate
* operations. The individual {@code AbstractPipeline} objects are often
* referred to as <em>stages</em>, where each stage describes either the stream
* source or an intermediate operation.
(An AbstractPipeline is an initial portion of a stream pipeline: the source plus zero or more intermediate operations.)
*
* <p>A concrete intermediate stage is generally built from an
* {@code AbstractPipeline}, a shape-specific pipeline class which extends it
* (e.g., {@code IntPipeline}) which is also abstract, and an operation-specific
* concrete class which extends that. {@code AbstractPipeline} contains most of
* the mechanics of evaluating the pipeline, and implements methods that will be
* used by the operation; the shape-specific classes add helper methods for
* dealing with collection of results into the appropriate shape-specific
* containers.
* (The shape-specific pipeline classes, e.g. IntPipeline, exist to avoid auto-boxing and unboxing.)
* <p>After chaining a new intermediate operation, or executing a terminal
* operation, the stream is considered to be consumed, and no more intermediate
* or terminal operations are permitted on this stream instance.
* (Once a new intermediate operation has been chained, or a terminal operation executed, this stream instance is considered consumed and no further operations are allowed on it.)
* @implNote
* <p>For sequential streams, and parallel streams without
* <a href="package-summary.html#StreamOps">stateful intermediate
* operations</a>, parallel streams, pipeline evaluation is done in a single
* pass that "jams" all the operations together. For parallel streams with
* stateful operations, execution is divided into segments, where each
* stateful operations marks the end of a segment, and each segment is
* evaluated separately and the result used as the input to the next
* segment. In all cases, the source data is not consumed until a terminal
* operation begins.
(In all cases, the source data is not consumed until a terminal operation begins.)
* @param <E_IN> type of input elements
* @param <E_OUT> type of output elements
* @param <S> type of the subclass implementing {@code BaseStream}
* @since 1.8
*/
abstract class AbstractPipeline<E_IN, E_OUT, S extends BaseStream<E_OUT, S>>
extends PipelineHelper<E_OUT> implements BaseStream<E_OUT, S> {
}
The relationship between inner classes and lambda expressions:
essentially an anonymous inner class and a lambda are not the same thing; they merely let you accomplish the same task.
A lambda is not syntactic sugar for, or an abbreviation of, an anonymous inner class; it is a new language construct. In a lambda, this refers to the enclosing instance, while in an anonymous inner class this refers to the anonymous object itself, as the example below shows.
public class LambdaTest {
    // the relationship between inner classes and lambda expressions
    Runnable r1 = () -> System.out.println(this); // 'this' is the enclosing LambdaTest instance

    // anonymous inner class
    Runnable r2 = new Runnable() {
        @Override
        public void run() {
            System.out.println(this); // 'this' is the anonymous inner class instance
        }
    };

    public static void main(String[] args) {
        LambdaTest lambdaTest = new LambdaTest();
        Thread t1 = new Thread(lambdaTest.r1);
        t1.start();
        System.out.println("- - -- - ");
        Thread t2 = new Thread(lambdaTest.r2);
        t2.start();
        // sample output:
        // com.sinosoft.lis.test.LambdaTest@62661526
        // com.sinosoft.lis.test.LambdaTest$1@59a30351
    }
}
The stream implementation uses the template method pattern.
Streams are lazy and deferred: nothing executes until a terminal operation is invoked.
TerminalOp is the interface representing a terminal operation.
There are only four kinds of terminal operations: FindOp, ForEachOp, MatchOp and ReduceOp (illustrated in the sketch just below).
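As a rough illustration of those four categories (this is my own sketch; variable and class names are made up, and the mapping from public Stream methods to the internal op classes is stated as I understand it): findFirst/findAny are evaluated as a FindOp, forEach as a ForEachOp, anyMatch/allMatch/noneMatch as a MatchOp, and reduce/collect as a ReduceOp.

import java.util.Arrays;
import java.util.List;
import java.util.Optional;

public class TerminalOpKindsDemo {
    public static void main(String[] args) {
        List<Integer> nums = Arrays.asList(1, 2, 3, 4, 5);

        // FindOp: findFirst / findAny
        Optional<Integer> first = nums.stream().filter(n -> n > 2).findFirst();

        // ForEachOp: forEach / forEachOrdered
        nums.stream().forEach(System.out::println);

        // MatchOp: anyMatch / allMatch / noneMatch
        boolean anyEven = nums.stream().anyMatch(n -> n % 2 == 0);

        // ReduceOp: reduce / collect
        int sum = nums.stream().reduce(0, Integer::sum);

        System.out.println(first.orElse(-1) + " " + anyEven + " " + sum);
    }
}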
PipelineHelper
Analysis of the Stream intermediate-operation and terminal-operation class hierarchy and its design.
Intermediate operations:
BaseStream -> AbstractPipeline -> ReferencePipeline -> Head || StatelessOp || StatefulOp
- BaseStream: the topmost stream abstraction
- AbstractPipeline: the pipeline itself, holding the stream-source fields and most of the evaluation machinery
- ReferencePipeline: the reference-typed pipeline that builds the stream source; Head is the source stage, StatelessOp a stateless intermediate operation, StatefulOp a stateful intermediate operation
Streams are lazy and deferred: nothing executes until a terminal operation appears; until then the intermediate operations are only being chained together, each wrapped as a Sink (a sketch after these notes prints the runtime class of each stage).
Terminal operations:
TerminalOp -> FindOp || ForEachOp || MatchOp || ReduceOp
- TerminalOp: the topmost terminal-operation interface
- TerminalSink: the sink that receives elements during a terminal operation
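A quick way to see those stage classes at runtime (a throwaway sketch of my own; the printed class names are HotSpot/JDK 8 implementation details and may vary): build a pipeline step by step and print the class of each stage.

import java.util.stream.Stream;

public class PipelineStagesDemo {
    public static void main(String[] args) {
        Stream<String> source = Stream.of("a", "b", "c");            // source stage
        Stream<String> filtered = source.filter(s -> !s.isEmpty());  // stateless intermediate op
        Stream<String> sorted = filtered.sorted();                    // stateful intermediate op

        // On JDK 8 these typically print ReferencePipeline$Head, an anonymous
        // StatelessOp subclass (ReferencePipeline$2) and SortedOps$OfRef;
        // the exact names are implementation details, not API.
        System.out.println(source.getClass());
        System.out.println(filtered.getClass());
        System.out.println(sorted.getClass());
    }
}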
Java8(8) Date and Time API
joda-time
Before getting into the java.time package added in JDK 8, first a quick look at joda-time.
// class wrapper and imports added so the snippet compiles as-is
import org.joda.time.DateTime;
import org.joda.time.LocalDate;

public class JodaTest1 {
    public static void main(String[] args) {
        // basic usage
        DateTime today = new DateTime();
        DateTime tomorrow = today.plusDays(1);
        // today
        System.out.println(today.toString("yyyy-MM-dd"));
        // tomorrow
        System.out.println(tomorrow.toString("yyyy-MM-dd"));
        System.out.println("- - - - -");
        // first day of the current month
        DateTime firstDayOfMonth = today.withDayOfMonth(1);
        System.out.println(firstDayOfMonth.toString("yyyy-MM-dd"));
        // last day of the month three months from now
        LocalDate localDate = new LocalDate();
        LocalDate lastDayIn3Months = localDate.plusMonths(3).dayOfMonth().withMaximumValue();
        System.out.println(lastDayIn3Months);
        // first day of the month three months from now
        LocalDate firstDayIn3Months = localDate.plusMonths(3).dayOfMonth().withMinimumValue();
        System.out.println(firstDayIn3Months);
        // last day of March two years ago
        // (Joda types are immutable, so the result must be assigned, not discarded)
        DateTime lastDayOfMarchTwoYearsAgo =
                new DateTime().minusYears(2).monthOfYear().setCopy(3).dayOfMonth().withMaximumValue();
        System.out.println(lastDayOfMarchTwoYearsAgo);
    }
}
- example:
// imports added so the snippet compiles as-is
import java.util.Date;

import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import org.joda.time.format.DateTimeFormat;

public class JodaTest2 {
    // parse a UTC timestamp string such as 2014-11-11T02:22:22.222z into a java.util.Date
    public static Date to2c(String date) {
        // convert the server-side (UTC) string into a client-side Date
        DateTime parse = DateTime.parse(date, DateTimeFormat.forPattern("yyyy-MM-dd'T'HH:mm:ss.SSSZ"));
        return parse.toDate();
    }

    public static String toString(Date date) {
        // convert a client-side Date into the server-side (UTC) representation
        DateTime date1 = new DateTime(date, DateTimeZone.UTC);
        return date1.toString();
    }

    public static String date2String(Date date, String dateFormat) {
        DateTime dateTime = new DateTime(date);
        return dateTime.toString(dateFormat);
    }

    public static void main(String[] args) {
        System.out.println(JodaTest2.to2c("2014-11-11T02:22:22.222z"));
        System.out.println(JodaTest2.toString(new Date())); // UTC, 8 hours behind China Standard Time
        System.out.println(JodaTest2.date2String(new Date(), "yyyy-MM-dd"));
    }
}
The date and time API in Java
All of the new date/time types in Java 8 are immutable, which guarantees thread safety (a tiny sketch of this follows).
There is no need to dig into their source code; knowing how to use them is enough, and the time saved is better spent on more important, more valuable things.
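A minimal sketch of that immutability (class name is made up): plusDays returns a brand-new LocalDate and leaves the original untouched, which is what makes these objects safe to share between threads.

import java.time.LocalDate;

public class ImmutableDateDemo {
    public static void main(String[] args) {
        LocalDate today = LocalDate.now();
        LocalDate tomorrow = today.plusDays(1);   // returns a NEW LocalDate

        // 'today' is unchanged: every "modification" method returns a new instance
        System.out.println(today);
        System.out.println(tomorrow);
    }
}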
// class wrapper and imports added so the snippet compiles as-is
import java.time.Clock;
import java.time.Instant;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.time.MonthDay;
import java.time.Period;
import java.time.YearMonth;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.temporal.ChronoUnit;
import java.util.Set;
import java.util.TreeSet;

public class Java8DateTimeTest {
    public static void main(String[] args) {
        LocalDate localDate = LocalDate.now();
        System.out.println(localDate);
        // get the year
        System.out.println(localDate.getYear());
        // get the month
        System.out.println(localDate.getMonthValue());
        // construct from year, month and day
        LocalDate localDate1 = LocalDate.of(2030, 3, 22);
        System.out.println(localDate1);
        // construct a MonthDay (month and day only) from an existing date
        LocalDate localDate2 = LocalDate.of(2020, 3, 25);
        MonthDay monthDay = MonthDay.of(localDate2.getMonth(), localDate2.getDayOfMonth());
        System.out.println(monthDay);
        // current time of day (hours, minutes, seconds)
        LocalTime localTime = LocalTime.now();
        System.out.println(localTime);
        // + 20 minutes, - 2 hours
        LocalTime localTime1 = localTime.plusMinutes(20).minusHours(2);
        System.out.println(localTime1);
        System.out.println("- - - - -");
        // current date plus two weeks (amount to add, unit to add)
        LocalDate localDate3 = LocalDate.now().plus(2, ChronoUnit.WEEKS);
        System.out.println(localDate3);
        // current date minus two months
        LocalDate localDate4 = localDate.minus(2, ChronoUnit.MONTHS);
        System.out.println(localDate4);
        // Clock object
        Clock clock = Clock.systemDefaultZone();
        System.out.println(clock);
        // comparing two dates
        LocalDate localDate5 = LocalDate.now();
        LocalDate localDate6 = LocalDate.of(2020, 1, 21);
        System.out.println(localDate5.isBefore(localDate6));
        System.out.println(localDate5.isAfter(localDate6));
        System.out.println(localDate5.equals(localDate6));
        // time zones
        Set<String> availableZoneIds = ZoneId.getAvailableZoneIds();
        availableZoneIds.forEach(System.out::println);
        // sort the unordered set of zone ids from above
        Set<String> treeSet = new TreeSet<>(availableZoneIds);
        treeSet.forEach(System.out::println);
        // a few examples using a time zone
        ZoneId zoneId = ZoneId.of("Asia/Shanghai");
        LocalDateTime localDateTime = LocalDateTime.now();
        System.out.println(localDateTime);
        ZonedDateTime zonedDateTime = ZonedDateTime.of(localDateTime, zoneId);
        System.out.println(zonedDateTime);
        System.out.println("- - -- - -");
        // year-month object
        YearMonth yearMonth = YearMonth.now();
        System.out.println(yearMonth);
        System.out.println(yearMonth.lengthOfMonth());
        System.out.println(yearMonth.isLeapYear());
        YearMonth yearMonth1 = YearMonth.of(2019, 2);
        System.out.println(yearMonth1);
        System.out.println(yearMonth1.lengthOfMonth());
        System.out.println(yearMonth1.lengthOfYear());
        System.out.println(yearMonth1.isLeapYear()); // leap year?
        LocalDate localDate7 = LocalDate.now();
        LocalDate localDate8 = LocalDate.of(2017, 3, 22);
        // Period: the amount of time between two dates, split into years, months and days
        Period period = Period.between(localDate7, localDate8);
        System.out.println(period);
        System.out.println(period.getDays()); // only the day component of the period
        System.out.println("- - -- - - ");
        // Instant: the current moment on the UTC time line, without a time zone
        System.out.println(Instant.now());
        // ... look up the rest as you need it
    }
}
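One thing the example above does not cover is formatting and parsing with the new API. Below is a short sketch of my own using DateTimeFormatter (the pattern and class name are my own choices, not from the course):

import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class FormatterDemo {
    public static void main(String[] args) {
        DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

        // format: LocalDateTime -> String
        String text = LocalDateTime.now().format(formatter);
        System.out.println(text);

        // parse: String -> LocalDate, using a predefined ISO formatter
        LocalDate parsed = LocalDate.parse("2020-02-07", DateTimeFormatter.ISO_LOCAL_DATE);
        System.out.println(parsed);
    }
}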
Java8 (Review and Summary)
A review and recap of Java 8.
Fifty lessons in total, from start to finish. The takeaway is not just the technology but, even more, a way of learning.
Learning JDK 8 systematically:
- Java 8新特性介绍
- Lambda表达式介绍
- 使用Lambda表达式代替匿名内部类
- Lambda表达式的作用
- 外部迭代与内部迭代
- Java Lambda表达式语法详解
- 函数式接口详解
- 传递值与传递行为
- Stream深度解析
- Stream API详解
- 串行流与并行流
- Stream构成
- Stream源生成方式
- Stream操作类型
- Stream转换
- Optional详解
- 默认方法详解
- 方法与构造方法引用
- Predicate接口详解
- Function接口详解
- Consumer接口剖析
- Filter介绍
- Map-Reduce讲解、中间操作与终止操作
- 新的Date API分析
Most of the time went into understanding how things are implemented underneath.
The importance of fundamentals.
2020-02-07 12:03:41: consolidated the Java 8 study notes into a single file to keep them organized.