Twitter Data Scraping Methods (Part 3)
Scraping Tweets Directly from Twitter's Search – Update
Published August 1, 2015
Sorry for the delayed response to this; I've seen several comments on this topic, but I've been pretty busy with other things recently, and this is the first chance I've had to address it!
As with most web scraping, at some point a provider will change their source code and scrapers will break. This is something that Twitter has done with its recent site redesign. Having gone over the changes, there are two that affect this scraping script.
The first change is tiny. Originally, to get all tweets rather than just "top tweets", we set the type_param "f" to "realtime". However, that value has now changed to simply "tweets".
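In query-string terms, the only difference is the value passed for "f"; something along these lines (treating "q" as the search-term parameter is my assumption here, only the "f" values come from the change described above):

// old request for all tweets:  ...&q=<your search>&f=realtime
// new request for all tweets:  ...&q=<your search>&f=tweets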
The second change is a bit trickier to handle, as the scroll_cursor parameter no longer exists. Instead, if we look at the AJAX call that Twitter makes on its infinite scroll, we see a different parameter:
max_position:TWEET-399159003478908931-606844263347945472-BD1UO2FFu9QAAAAAAAAETAAAAAcAAAASAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
The parameter there, "max_position", looks very similar to the original scroll_cursor parameter. However, unlike the scroll_cursor, which existed in the response and could simply be extracted, we have to construct this one ourselves.
As can be seen from the example, we have “TWEET” followed by two sets of numbers, and what appears to be “BD1UO2FFu9” screaming and falling off a cliff. The good news is, we actually only need the first three components.
“TWEET” will always stay the same, but the two sets of numbers are actually tweet IDs, representing the oldest and the most recently created tweets you’ve extracted.
For our newest tweet (2nd number set), we only need to extract it once, as we can keep it the same for all subsequent calls, just as Twitter does.
For the oldest tweet (1st number set), we need to take the last tweet ID in our results on each call and use it to update our max_position value.
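To make that concrete with the IDs from the example above, the value we rebuild on each call looks like this (the trailing token can be dropped, since only the first three components are needed):

String oldestTweetId = "399159003478908931";  // 1st number set: last tweet in our current results
String newestTweetId = "606844263347945472";  // 2nd number set: first tweet from our very first response
String maxPosition = "TWEET-" + oldestTweetId + "-" + newestTweetId;
// -> "TWEET-399159003478908931-606844263347945472"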
So, let’s take a look at some of the code I’ve changed:
String minTweet = null;
Rather than our original scroll_cursor value, we now have “minTweet”. Initially this is set to null, as we don’t have a value to begin with. On our first call, though, we take the first tweet in our response and, if minTweet is still null, set minTweet to that tweet’s ID.
Next, we need to get the maxTweet. As mentioned above, we get this by taking the last tweet in our results and returning its ID. To avoid repeating results, we need to make sure the minTweet ID does not equal the maxTweet ID; if it doesn’t, we construct our “max_position” query value in the format “TWEET-{maxTweetId}-{minTweetId}”.
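Putting that together, here is a minimal sketch of the paging logic as described; executeSearch, constructUrl and extractTweetIds stand in for the methods from the earlier parts of this series rather than their real signatures, and it assumes java.util.List is imported:

String minTweet = null;      // newest tweet ID, fixed after the first response
String maxPosition = null;   // value sent as the max_position parameter

while (true) {
    TwitterResponse response = executeSearch(constructUrl(query, maxPosition));
    List<String> tweetIds = extractTweetIds(response);
    if (tweetIds.isEmpty()) {
        break;                                             // no more results
    }
    if (minTweet == null) {
        minTweet = tweetIds.get(0);                        // first tweet of the first response
    }
    String maxTweet = tweetIds.get(tweetIds.size() - 1);   // last tweet of this response
    if (minTweet.equals(maxTweet)) {
        break;                                             // we would start repeating results
    }
    maxPosition = "TWEET-" + maxTweet + "-" + minTweet;    // next max_position value
}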
You’ll also notice I changed the SCROLL_CURSOR_PARAM to “max_position” from “scroll_cursor”. Normally I’d change the variable name as well, but for visual reference, I’ve kept it the same for now, so you know where to change it.
Also, in constructUrl, the TYPE_PARAM value has been set to “tweets”.
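In code terms, the two renames amount to something like this; the declaration style is an assumption on my part, since only the constant names and values come from the post:

public final static String SCROLL_CURSOR_PARAM = "max_position";  // was "scroll_cursor"; Java variable name kept for reference
// and in constructUrl, pass "tweets" (rather than "realtime") as the value for TYPE_PARAM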
Finally, make sure you modify your TwitterResponse class so that it mirrors the parameters returned in the JSON response.
All you need to do is replace the original class variables with these, and update the constructor and getter/setter fields:
private boolean has_more_items;
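As a rough guide, the replacement variables might look something like the following; apart from has_more_items, these field names are my assumption about the JSON the search timeline returned at the time, so verify them against an actual response before relying on them:

private boolean has_more_items;          // whether another page of results exists
private String items_html;               // HTML fragment containing the tweets (assumed field name)
private String min_position;             // position marker returned by Twitter (assumed field name)
private long focused_refresh_interval;   // refresh hint returned by Twitter (assumed field name)
// ...plus a matching constructor and getters/setters for each field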