Applied Spatiotemporal Data Mining
Course description
With the continuing advances of geographic information science and geospatial
technologies, spatially referenced information has become easily and increasingly
available over the past decades and now serves as an important information source in
scientific research and decision-making processes. To take full advantage
of this rich collection of spatial (and temporal) data, statistical analysis is
often necessary, e.g., to extract implicit knowledge such as spatial relations and
patterns that are not explicit in the data. Spatial data analysis distinguishes
itself from classical data analysis in that it focuses on locations,
areas, distances, relationships, and interactions of measurements that are usually
referenced as points, lines, and areal units in geographical space. Over the
past decades, a plethora of theories, methods, and tools for spatial analysis have
been developed from different perspectives and have converged into the fruitful fields of
geographic information science (GIScience) and spatial statistics.
The purpose of this class is to present commonly used methods and current
trends in spatial and spatiotemporal data analysis, along with innovative applications
in relevant fields (e.g., environmental science and engineering, natural resources
management, ecology, public health, climate sciences, civil engineering, and the social
sciences). In this class, we will review the basic principles of spatiotemporal
analysis and modeling, and discuss commonly used methods and tools. Students
are expected to actively participate in class lectures, complete lab assignments,
read assigned articles, and develop a project of their own choice or one directly related
to their thesis/dissertation topics. The following topics will be covered in the
class, but can be adjusted to meet the students' interests:
• Exploratory spatial data analysis
• Space-time geostatistics
• Spatial point process and species distribution modeling
• Spatiotemporal disease mapping
• Time series map analysis and change detection
Prerequisites
The prerequisites for this course include an understanding of the basic concepts of spatial
analysis and statistics, which can be fulfilled with basic statistics courses or
a graduate-level GIS course. Students from different disciplines are welcome;
please contact the instructor should there be any questions about the prerequisites.
Learning outcomes
After completing this course, students are expected to be able to:
• formulate real-world problems in the context of spatial and spatiotemporal
analysis with a knowledge of basic concepts and principles in this field;
• understand commonly used concepts and methods in statistical analysis of
spatiotemporal data;
• apply appropriate spatial and spatiotemporal analytical methods to solve
the formulated problems, and be able to critically review alternative
methods;
• utilize programmable scientific computing tools (e.g., R) to make maps,
solve spatial and spatiotemporal analysis problems, and evaluate and compare
the results of alternative methods.
Readings
• A reading list of articles will be provided. The following books will be
frequently referred to:
– Bivand, Roger S., Pebesma, Edzer J., and Gómez-Rubio, Virgilio
(2008), Applied Spatial Data Analysis with R, Springer (eBook available at the TTU library).
– Cressie, N., and Wikle, C. K. (2011), Statistics for Spatio-temporal
Data, John Wiley & Sons.
Sample course outline
| Day | Sample topics | Readings | Hours |
| --- | ------------- | -------- | ----- |
| 1 | Class overview and introduction | Handouts | 3 |
| 2 | Point pattern analysis | Ch. 7 BPG | 3 |
| 3 | Species distribution modeling | Handouts | 3 |
| 4 | Space-time geostatistics | Ch. 8 BPG | 3 |
| 5 | Spatiotemporal regression | Ch. 9, 10 BPG | 3 |
| 6 | Time series map analysis | Handouts | 3 |
| 7 | Discussion and student presentations | | 5 |
Background of Instructor
Dr. Guofeng Cao is an Assistant Professor in the Department of Geosciences
at Texas Tech University. His research interests include geographic information
science and systems (GIS), cyberGIS, and spatiotemporal statistics, with a
primary focus on statistical learning of complex spatial and spatiotemporal
patterns across different domains. His research has been supported by various
funding agencies. He has published 45 peer-reviewed papers, including 30 journal
articles. He received a B.S. in Earth Science from Zhejiang University, an M.S. in
GIS from the Chinese Academy of Sciences, and an M.A. in Statistics and a Ph.D. in
Geography from the University of California, Santa Barbara. He also had several
years of industry experience before moving back to academia.
https://github.com/surfcao/summer2018-cug
---
title: "Day 1: Use R as GIS"
output: github_document
---
```{r global_options, results='asis', warning=FALSE}
knitr::opts_chunk$set(fig.width=12, fig.height=8, fig.path='Figs/', warning=FALSE, message=FALSE)
```
```{r load, echo=F, eval=T}
rm(list=ls())
x <- c("sp", "rgdal", "rgeos", "maptools", "classInt", "RColorBrewer", "GISTools", "maps", "raster", "ggmap")
#install.packages(x) # warning: this may take a number of minutes
lapply(x, library, character.only = TRUE) #load the required packages
```
# Spatial Objects
| | Without attributes | With attributes |
| ----- | ------------------ | -------------- |
|Points | SpatialPoints | SpatialPointsDataFrame|
|Lines | SpatialLines | SpatialLinesDataFrame|
|Polygons | SpatialPolygons | SpatialPolygonsDataFrame|
|Raster | SpatialGrid | SpatialGridDataFrame|
|Raster | SpatialPixels | SpatialPixelsDataFrame|
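Before loading real datasets, the relationship between these classes can be illustrated with a minimal sketch (toy coordinates, not part of the course data): the same point geometry is created first without attributes and then promoted to its data-frame variant.

```{r sp_objects_sketch, echo=T, eval=T}
# Toy example (assumed coordinates): SpatialPoints vs. SpatialPointsDataFrame
xy <- cbind(x = c(1, 2, 3), y = c(3, 2, 1))          # plain coordinate matrix
pts <- SpatialPoints(xy)                             # geometry only
class(pts)                                           # "SpatialPoints"
attrs <- data.frame(id = 1:3, value = c(10, 20, 30)) # attribute table
ptsdf <- SpatialPointsDataFrame(pts, data = attrs)   # geometry + attributes
class(ptsdf)                                         # "SpatialPointsDataFrame"
summary(ptsdf)
```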
```{r load_library1, echo=T, eval=T}
LubbockBlock<-readShapePoly("Data/LubbockBlockNew.shp") #read polygon shapefile
class(LubbockBlock)
HouseLocation<-read.csv("Data/HouseLatLon.csv") #read GPS data
class(HouseLocation)
coordinates(HouseLocation)<-c('Lon', 'Lat')
class(HouseLocation)
cropland<-raster("Data/Lubbock_CDL_2013_USDA.tif")
class(cropland)
tmin <- getData("worldclim", var = "tmin", res = 10) # this will download
class(tmin)
```
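Once loaded, the basic properties of these raster objects can be inspected directly; a short sketch using only the objects created above:

```{r raster_inspect, echo=T, eval=T}
# Quick checks on the raster objects loaded above
res(cropland)     # cell size in map units
extent(cropland)  # bounding box of the raster
ncell(cropland)   # number of cells
nlayers(tmin)     # tmin is a multi-layer stack (one layer per month)
```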
```{r load_library2, echo=T, eval=T}
LubbockBlock<-readOGR("./Data", "LubbockBlockNew") #read polygon shapefile
class(LubbockBlock)
```
# Mapping with R
## Basic Mapping
```{r mapping, echo=T, eval=T}
LubbockBlock<-readShapePoly("Data/LubbockBlockNew.shp") #read polygon shapefile
plot(LubbockBlock,axes=TRUE, col=alpha("gray70", 0.6)) #plot Lubbock block shapefile
#add title, scalebar, north arrow, and legend
HouseLocation<-read.csv("Data/HouseLatLon.csv") #read GPS data
price<-HouseLocation$TotalPrice
nclr<-5
priceclr<-brewer.pal(nclr, "Reds")
class<-classIntervals(price, nclr, style="quantile")
clocode<-findColours(class, priceclr)
points(HouseLocation$Lon, HouseLocation$Lat, pch=19, col=clocode, cex=0.5) #add houses on top of Lubbock block shapefile
title(main="Houses on Sale in Lubbock, 2014")
legend(-101.95, 33.65, legend=names(attr(clocode, "table")), fill =attr(clocode, "palette"), cex=0.5, bty="n")
#map.scale(x=-101.85, y=33.49,0.001,"Miles",4,0.5,sfcol='red')
north.arrow(xb=-101.95, yb=33.65, len=0.005, lab="N")
#plot raster
plot(cropland)
#plot raster stack
tmin <- getData("worldclim", var = "tmin", res = 10) # this will download
plot(tmin)
```
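Besides base `plot()`, the sp package also provides `spplot()` for quick attribute maps. Below is a minimal sketch using the meuse sample data shipped with sp (not part of the Lubbock dataset), left unevaluated here:

```{r spplot_sketch, echo=T, eval=F}
# Minimal spplot() sketch with the meuse sample data from the sp package
data(meuse)
coordinates(meuse) <- ~x + y   # promote the data.frame to a SpatialPointsDataFrame
spplot(meuse, "zinc", main = "Topsoil zinc concentration (ppm)")
```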
## Mapping with static Google Maps
```{R mapping2, echo=F, eval=F}
library(RgoogleMaps)
lubbock=geocode('lubbock')
newmap <- GetMap(center = c(lubbock$lat, lubbock$lon), zoom = 12, destfile = "newmap.png", maptype = "roadmap")
PlotOnStaticMap(newmap, lat=HouseLocation$Lat, lon=HouseLocation$Lon, col='red')
lubbock<-SpatialPolygons(LubbockBlock@polygons, proj4string=CRS("+init=epsg:4326"))
PlotPolysOnStaticMap(newmap, lubbock, col=alpha('blue', 0.2))
```
## Mapping with dynamic Google Maps
```{R mapping3, echo=F, eval=F}
library(plotGoogleMaps)
data(meuse)
coordinates(meuse)=~x+y
proj4string(meuse) = CRS('+init=epsg:28992')
plotGoogleMaps(meuse, filename='meuse.html')
HouseLocation<-read.csv("Data/HouseLatLon.csv") #read GPS data
coordinates(HouseLocation)<-c('Lon', 'Lat')
proj4string(HouseLocation)=CRS('+init=epsg:4326')
plotGoogleMaps(HouseLocation, filename='house.html')
ic = iconlabels(meuse$zinc, height=12)
plotGoogleMaps(meuse, iconMarker=ic, mapTypeId='ROADMAP', filename='meuse2.html')
#plot raster
data(meuse.grid)
coordinates(meuse.grid)<-c('x', 'y')
meuse.grid<-as(meuse.grid, 'SpatialPixelsDataFrame')
proj4string(meuse.grid) <- CRS('+init=epsg:28992')
mapMeuseCl<- plotGoogleMaps(meuse.grid,zcol= 'dist',at=seq(0,0.9,0.1),colPalette= brewer.pal(9,"Reds"), filename='meuse3.html')
#plot polygons
proj4string(LubbockBlock)=CRS("+init=epsg:4326")
m<-plotGoogleMaps(LubbockBlock, zcol="Pop2010", filename='MyMap6.htm', mapTypeId='TERRAIN', colPalette=brewer.pal(7,"Reds"), strokeColor="white")
#plot line
meuse.grid<-as(meuse.grid, 'SpatialPixelsDataFrame')
im<-as.image.SpatialGridDataFrame(meuse.grid[ 'dist' ])
cl<-ContourLines2SLDF(contourLines(im))
proj4string(cl) <- CRS( '+init=epsg:28992')
mapMeuseCl<- plotGoogleMaps(cl,zcol= 'level' ,strokeWeight=1:9, filename= 'myMap6.htm',mapTypeId= 'ROADMAP')
```
## Changing map projections
```{r projection, eval=T }
#project a vector
boundary<-readShapePoly('Data/boundary')
proj4string(boundary) <- CRS("+proj=utm +zone=17 +datum=WGS84 +units=m +no_defs +ellps=WGS84 +towgs84=0,0,0")
proj4string(boundary)
boundaryProj<-spTransform(boundary, CRS("+init=epsg:3857"))
proj4string(boundaryProj)
#project a raster
proj4string(cropland)
plot(cropland)
aea <- CRS("+init=esri:102003") #Albers equal area
projCropland=projectRaster(cropland, crs=aea)
plot(projCropland)
```
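Note the distinction used above: assigning `proj4string()` only declares which CRS the existing coordinates are in, whereas `spTransform()` (vectors) and `projectRaster()` (rasters) recompute the coordinates into a new CRS. A minimal sketch with hypothetical lon/lat points near Lubbock:

```{r crs_note, echo=T, eval=F}
# Declaring vs. reprojecting a CRS (toy WGS84 lon/lat points, assumed values)
pts <- SpatialPoints(cbind(lon = c(-101.9, -101.8), lat = c(33.5, 33.6)))
proj4string(pts) <- CRS("+init=epsg:4326")           # declare: coordinates unchanged
ptsMerc <- spTransform(pts, CRS("+init=epsg:3857"))  # reproject: coordinates recomputed
coordinates(pts)
coordinates(ptsMerc)
```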
# Spatial analysis with R
```{r load_library4, echo=T, eval=F}
# Note: this chunk uses interactive selection (click, drawExtent, drawPoly), so it is not evaluated when knitting
#subsetting a spatial dataframe
LubbockBlock<-readOGR("./Data", "LubbockBlockNew") #read polygon shapefile
selection = LubbockBlock[LubbockBlock$Pop2010>500,]
plot(selection)
#select by clicking
selected = click(LubbockBlock)
extent = drawExtent()
extent=as(extent,'SpatialPolygons')
proj4string(extent)=proj4string(selection)
# perform erase
plot(erase(selection, extent))
poly = drawPoly()
proj4string(poly) = proj4string(LubbockBlock)
# perform clip
cropselection = crop(LubbockBlock,poly)
plot(cropselection)
```
## Vector analysis (overlay)
```{r vector, echo=T, eval=T }
# Point-in-polygon overlay: bear sightings vs. national park boundaries
# Datasets
# * CSV table of (fictionalized) brown bear sightings in Alaska, each
# containing an arbitrary ID and spatial location specified as a
# lat-lon coordinate pair.
# * Polygon shapefile containing the boundaries of US National Parks
# greater than 100,000 acres in size.
bears <- read.csv("Data/bear-sightings.csv")
coordinates(bears) <- c("longitude", "latitude")
# read in National Parks polygons
parks <- readOGR("Data", "10m_us_parks_area")
# tell R that bear coordinates are in the same lat/lon reference system as the parks data
proj4string(bears) <- proj4string(parks)
# combine is.na() with over() to do the containment test; note that we
# need to "demote" parks to a SpatialPolygons object first
inside.park <- !is.na(over(bears, as(parks, "SpatialPolygons")))
# calculate what fraction of sightings were inside a park
mean(inside.park)
## [1] 0.1720648
# determine which park contains each sighting and store the park name as an attribute of the bears data
bears$park <- over(bears, parks)$Unit_Name
# draw a map big enough to encompass all points, then add in park boundaries superimposed upon a map of the United States
plot(bears)
map("world", region="usa", add=TRUE)
plot(parks, border="green", add=TRUE)
legend("topright", cex=0.85, c("Bear in park", "Bear not in park", "Park boundary"), pch=c(16, 1, NA), lty=c(NA, NA, 1), col=c("red", "grey", "green"), bty="n")
title(expression(paste(italic("Ursus arctos"), " sightings with respect to national parks")))
# plot bear points with separate colors inside and outside of parks
points(bears[!inside.park, ], pch=1, col="gray")
points(bears[inside.park, ], pch=16, col="red")
# write the augmented bears dataset to CSV
write.csv(bears, "bears-by-park.csv", row.names=FALSE)
# ...or create a shapefile from the points
writeOGR(bears, ".", "bears-by-park", driver="ESRI Shapefile")
```
## Raster analysis
```{r raster, eval=T, echo=T}
tmin=getData('worldclim', var='tmin', res=10)
# Raster calculator
diff=tmin$tmin1 - tmin$tmin10
## the following code is faster for large datasets.
overlay(tmin$tmin1, tmin$tmin10, fun=function(x,y){return (x-y)})
elevation <- getData("alt", country = "ESP")
slope <- terrain(elevation, opt = "slope")
aspect <- terrain(elevation, opt = "aspect")
hill <- hillShade(slope, aspect, 40, 270)
plot(hill, col = grey(0:100/100), legend = FALSE, main = "Spain")
plot(elevation, col = rainbow(25, alpha = 0.35), add = TRUE)
#contours
contour(elevation)
```
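A common vector-raster operation is sampling raster values at point locations with `extract()`. The sketch below uses two hypothetical points (approximate coordinates for Madrid and Valencia) against the elevation raster downloaded above:

```{r extract_sketch, echo=T, eval=F}
# Sample raster values at point locations with raster::extract()
sites <- SpatialPoints(cbind(lon = c(-3.7, -0.4), lat = c(40.4, 39.5)),
                       proj4string = CRS("+init=epsg:4326"))  # approx. Madrid, Valencia
extract(elevation, sites)  # elevation (m) at each point
```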
```{r raster2, eval=F, echo=T}
#crop raster
plot(hill, col = grey(0:100/100), legend = FALSE, main = "Spain")
plot(elevation, col = rainbow(25, alpha = 0.35), add = TRUE)
extent=drawExtent()
cropElev <- crop(elevation, extent)
plot(cropElev)
```