Building a Simple News Client App (DCloud + ThinkPHP + Scrapy)
I spent about a month recently building a news app. Its function is very simple: news items from an existing web page are crawled into a backend database on a schedule, and the app then displays them.
1. The Client
The client uses the DCloud framework. I am basically a beginner at JavaScript, have never written anything substantial, and am even newer to HTML5, so I simply went with a ready-made front-end framework. I tried AppCan and APICloud before settling on DCloud; its HBuilder editor really is quite good.
Below is part of the key code: it uses DCloud's pull-to-refresh mechanism and fetches the JSON list returned by the backend via Ajax.
<!DOCTYPE html>
<html>

<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width,initial-scale=1,minimum-scale=1,maximum-scale=1,user-scalable=no" />
    <title></title>
    <script src="js/mui.min.js"></script>
    <link href="css/mui.min.css" rel="stylesheet" />
    <script type="text/javascript" charset="utf-8">
        var t; // keeps the latest news list so the tap handler can pass item details to the detail page

        mui.init({
            pullRefresh: {
                container: "#pullMine", // pull-to-refresh container; any selector querySelector can resolve (id, .class, ...)
                down: {
                    contentdown: "下拉可以刷新",   // optional: caption shown while pulling down
                    contentover: "释放立即刷新",   // optional: caption shown once released far enough to refresh
                    contentrefresh: "正在刷新...", // optional: caption shown while refreshing
                    callback: pulldownRefresh      // required: refresh callback, e.g. fetch new data from the server via ajax
                }
            }
        });

        mui.plusReady(function() {
            console.log("current page URL: " + plus.webview.currentWebview().getURL());
            mui.ajax('http://202.110.123.123:801/newssystem/index.php/Home/News/getlist_sd', {
                dataType: 'json',
                type: 'get',
                timeout: 10000,
                success: function(data) {
                    t = data;
                    var list = document.getElementById("list");
                    var finallist = '';
                    for (var i = data.length - 1; i >= 0; i--) {
                        finallist += '<li data-id="' + i + '" class="mui-table-view-cell"><a class="mui-navigate-right"><div class="mui-media-body">' + data[i].title + '<p class="mui-ellipsis">' + data[i].pubtime + '</p></div></a></li>';
                    }
                    list.innerHTML = finallist;
                    // open the detail page with the tapped item's fields passed as extras
                    mui('#list').on('tap', 'li', function() {
                        mui.openWindow({
                            url: 'detail_sd.html',
                            id: 'detail_sd',
                            extras: {
                                title: t[this.getAttribute('data-id')].title,
                                author: t[this.getAttribute('data-id')].author,
                                pubtime: t[this.getAttribute('data-id')].pubtime,
                                content: t[this.getAttribute('data-id')].content
                            }
                        });
                    });
                },
                error: function() {}
            });
        });

        /**
         * Pull-to-refresh implementation
         */
        function pulldownRefresh() {
            setTimeout(function() {
                console.log("refreshing....");
                mui.ajax('http://202.110.123.123:801/newssystem/index.php/Home/News/getlist_sd', {
                    dataType: 'json',
                    type: 'get',
                    timeout: 10000,
                    success: function(data) {
                        t = data;
                        var list = document.getElementById("list");
                        var finallist = '';
                        for (var i = data.length - 1; i >= 0; i--) {
                            finallist += '<li data-id="' + i + '" class="mui-table-view-cell"><a class="mui-navigate-right"><div class="mui-media-body">' + data[i].title + '<p class="mui-ellipsis">' + data[i].pubtime + '</p></div></a></li>';
                        }
                        list.innerHTML = finallist;
                    },
                    error: function() {}
                });
                mui('#pullMine').pullRefresh().endPulldownToRefresh(); // refresh completed
            }, 1500);
        }
    </script>
</head>

<body>
    <div id="pullMine" class="mui-content mui-scroll-wrapper">
        <div class="mui-scroll">
            <ul class="mui-table-view" id="list">
            </ul>
        </div>
    </div>
</body>

</html>
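The list page opens detail_sd.html and hands the selected item over through extras, but the detail page itself is not included in the post. A minimal sketch of what it might look like follows; the element ids (title, pubtime, content) and the layout are assumptions, and the only part taken from the code above is that the extras arrive as properties of the current webview:

<!DOCTYPE html>
<html>

<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width,initial-scale=1,minimum-scale=1,maximum-scale=1,user-scalable=no" />
    <title></title>
    <script src="js/mui.min.js"></script>
    <link href="css/mui.min.css" rel="stylesheet" />
    <script type="text/javascript" charset="utf-8">
        mui.init();
        mui.plusReady(function() {
            // the extras passed to mui.openWindow() become properties of the current webview
            var self = plus.webview.currentWebview();
            document.getElementById("title").innerText = self.title;
            document.getElementById("pubtime").innerText = self.author + "  " + self.pubtime;
            document.getElementById("content").innerHTML = self.content;
        });
    </script>
</head>

<body>
    <div class="mui-content" style="padding: 10px;">
        <h3 id="title"></h3>
        <p id="pubtime" class="mui-ellipsis"></p>
        <div id="content"></div>
    </div>
</body>

</html>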
2. The Backend: PHP Publishing API
The backend uses the ThinkPHP framework.
<?php
namespace Home\Controller;
use Think\Controller;

class NewsController extends Controller {
    // return the latest 30 news items as JSON
    public function getlist() {
        $newsList = M('news')->order('pubtime asc')->limit(30)->select();
        echo json_encode($newsList);
    }

    // same as getlist(), but reads from the newssd table
    public function getlist_sd() {
        $newsList = M('newssd')->order('pubtime asc')->limit(30)->select();
        echo json_encode($newsList);
    }
}
?>
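For reference, the list page above reads the title, author, pubtime and content fields of each element, so each action is expected to return a JSON array along these lines (the values here are made-up placeholders, not real output; json_encode() actually emits every column of the underlying table):

[
    {
        "title": "Example headline",
        "author": "Example author",
        "pubtime": "2015-10-29",
        "content": "<p>Example body HTML ...</p>"
    }
]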
3. The Backend Crawler
It uses Scrapy to crawl the news content and write it into the database.
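The pipeline below writes rows into a tp_news table; assuming ThinkPHP's default tp_ table prefix, this is the same table that M('news') reads on the PHP side. The post never shows the schema, so the following is only a sketch reconstructed from the columns the pipeline's INSERT/UPDATE statements use, with guessed column types:

-- hypothetical schema for tp_news, inferred from the pipeline code below
CREATE TABLE tp_news (
    linkmd5id CHAR(32) NOT NULL PRIMARY KEY,  -- MD5 of the article URL, used to avoid duplicates
    title     VARCHAR(255),
    content   TEXT,
    author    VARCHAR(64),
    link      VARCHAR(255),
    updated   DATETIME,
    pubtime   VARCHAR(32),
    pubtime2  VARCHAR(32)
) DEFAULT CHARSET=utf8;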
pipelines.py
# -*- coding: utf-8 -*-

# Item pipelines for the news crawler.
# Don't forget to add these pipelines to the ITEM_PIPELINES setting.
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

import json
import codecs
from datetime import datetime
from hashlib import md5

from scrapy import log
from twisted.enterprise import adbapi
import MySQLdb
import MySQLdb.cursors


class JsonWithEncodingtutorialPipeline(object):
    """Dump every item to a local JSON-lines file (one JSON object per line)."""
    def __init__(self):
        self.file = codecs.open('qdnews.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        line = json.dumps(dict(item), ensure_ascii=False) + "\n"
        self.file.write(line)
        return item

    def spider_closed(self, spider):
        self.file.close()


class MySQLStoretutorialPipeline(object):
    """Insert or update crawled news rows in MySQL through a Twisted connection pool."""
    def __init__(self, dbpool):
        self.dbpool = dbpool

    @classmethod
    def from_settings(cls, settings):
        dbargs = dict(
            host=settings['MYSQL_HOST'],
            db=settings['MYSQL_DBNAME'],
            user=settings['MYSQL_USER'],
            passwd=settings['MYSQL_PASSWD'],
            charset='utf8',
            cursorclass=MySQLdb.cursors.DictCursor,
            use_unicode=True,
        )
        dbpool = adbapi.ConnectionPool('MySQLdb', **dbargs)
        return cls(dbpool)

    # called by Scrapy for every item
    def process_item(self, item, spider):
        d = self.dbpool.runInteraction(self._do_upinsert, item, spider)
        d.addErrback(self._handle_error, item, spider)
        d.addBoth(lambda _: item)
        return d

    # update the row if the link already exists, otherwise insert a new one
    def _do_upinsert(self, conn, item, spider):
        linkmd5id = self._get_linkmd5id(item)
        now = datetime.now().replace(microsecond=0).isoformat(' ')
        conn.execute("""
            select 1 from tp_news where linkmd5id = %s
        """, (linkmd5id, ))
        ret = conn.fetchone()
        # title/author/pubtime are slices of the raw extracted HTML strings
        if ret:
            conn.execute("""
                update tp_news set title = %s, content = %s, author = %s, pubtime = %s, pubtime2 = %s, link = %s, updated = %s
                where linkmd5id = %s
            """, (item['title'][0][4:-5], item['content'][0], item['pubtime'][0][16:-4],
                  item['pubtime'][0][-14:-4], item['pubtime'][0][-14:-4], item['link'][0], now, linkmd5id))
        else:
            conn.execute("""
                insert into tp_news(linkmd5id, title, content, author, link, updated, pubtime, pubtime2)
                values(%s, %s, %s, %s, %s, %s, %s, %s)
            """, (linkmd5id, item['title'][0][4:-5], item['content'][0], item['pubtime'][0][16:-4],
                  item['link'][0], now, item['pubtime'][0][-14:-4], item['pubtime'][0][-14:-4]))

    # MD5-hash the article URL; this is the key used to avoid crawling duplicates
    def _get_linkmd5id(self, item):
        return md5(item['link'][0]).hexdigest()

    # error handling
    def _handle_error(self, failure, item, spider):
        log.err(failure)
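The from_settings() hook above pulls the MySQL connection parameters out of the project settings, and the two pipeline classes only run if they are registered in ITEM_PIPELINES. A sketch of the relevant part of settings.py follows; the project name tutorial comes from the import in spiders.py, while the priorities and connection values are placeholders (older Scrapy versions take ITEM_PIPELINES as a plain list instead of a dict):

# settings.py (excerpt) -- connection values are placeholders
ITEM_PIPELINES = {
    'tutorial.pipelines.JsonWithEncodingtutorialPipeline': 300,
    'tutorial.pipelines.MySQLStoretutorialPipeline': 800,
}

MYSQL_HOST = '127.0.0.1'
MYSQL_DBNAME = 'newssystem'
MYSQL_USER = 'root'
MYSQL_PASSWD = 'your_password'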
items.py
# -*- coding: utf-8 -*-

# Item model for the scraped news entries.
# See: http://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class DmozItem(scrapy.Item):
    pubtime = scrapy.Field()
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()
    content = scrapy.Field()
    id = scrapy.Field()
    # fields referenced only by the earlier, experimental spiders below
    date = scrapy.Field()
    detail = scrapy.Field()
spiders.py
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from scrapy.utils.response import get_base_url
from scrapy.utils.url import urljoin_rfc
from scrapy.spiders import CrawlSpider
from scrapy.loader import ItemLoader
from scrapy.linkextractors.sgml import SgmlLinkExtractor
import scrapy

from tutorial.items import DmozItem


class DmozSpider(BaseSpider):
    # the Scrapy tutorial spider, kept as a starting point
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//ul/li')
        items = []
        for site in sites:
            item = DmozItem()
            item['title'] = site.select('a/text()').extract()
            item['link'] = site.select('a/@href').extract()
            item['desc'] = site.select('text()').extract()
            items.append(item)
        return items


class DmozSpider2(BaseSpider):
    # first attempt against the intranet news site: scrape the list page only
    name = "dmoz2"
    allowed_domains = ["10.60.32.179"]
    start_urls = [
        "http://10.60.32.179/Site/Site1/myindex.shtml",
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//*[@id="_ctl0_LblContent"]/div/div//ul/li')
        items = []
        for site in sites:
            item = DmozItem()
            item['date'] = site.select('span/text()').extract()
            item['title'] = site.select('a/text()').extract()
            item['link'] = site.select('a/@href').extract()
            item['desc'] = site.select('text()').extract()
            items.append(item)
        return items


class MySpider(BaseSpider):
    # second attempt: follow each list entry into its detail page, carrying state via meta
    name = "myspider"
    allowed_domains = ["10.60.32.179"]
    start_urls = [
        'http://10.60.32.179/Site/Site1/myindex.shtml',
    ]

    def parse(self, response):
        # collect `item_urls`
        hxs = HtmlXPathSelector(response)
        item_urls = hxs.select('//*[@id="_ctl0_LblContent"]/div/div//ul/li')
        base_url = get_base_url(response)
        items = []
        for item_url in item_urls:
            # note: this re-requests the same list page for every entry (an early, abandoned attempt)
            yield Request(url=response.url, callback=self.parse_item, meta={'items': items})

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)
        item_urls = hxs.select('//*[@id="_ctl0_LblContent"]/div/div//ul/li')
        item = DmozItem()
        items = response.meta['items']
        item['date'] = item_urls.select('span/text()').extract()
        item['title'] = item_urls.select('a/text()').extract()
        item['link'] = item_urls.select('a/@href').extract()
        item['desc'] = item_urls.select('text()').extract()
        # build the absolute detail-page URL and request it
        relative_url = item_urls.select('a/@href').extract()
        base_url = get_base_url(response)
        item_details_url = urljoin_rfc(base_url, relative_url[0])
        yield Request(url=item_details_url, callback=self.parse_details, dont_filter=True,
                      meta={'item': item, 'items': items})

    def parse_details(self, response):
        # populate more `item` fields from the detail page
        hxs = HtmlXPathSelector(response)
        item_detail = hxs.select('/html/body/center/div/div[4]/div[1]/p[1]').extract()
        item = response.meta['item']
        item['detail'] = item_detail
        items = response.meta['items']
        items.append(item)
        return items


class DmozSpider3(BaseSpider):
    # third attempt: try to read the detail page synchronously inside parse() (does not work)
    name = "dmoz3"
    allowed_domains = ["10.60.32.179"]
    start_urls = [
        'http://10.60.32.179/Site/Site1/myindex.shtml',
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//*[@id="_ctl0_LblContent"]/div/div//ul/li')
        items = []
        for site in sites:
            item = DmozItem()
            item['date'] = site.select('span/text()').extract()
            item['title'] = site.select('a/text()').extract()
            item['link'] = site.select('a/@href').extract()
            item['desc'] = site.select('text()').extract()
            base_url = get_base_url(response)
            relative_url = item['link'][0]
            item_details_url = urljoin_rfc(base_url, relative_url)
            # scrapy.http.Response(url) only wraps the URL, it does not download the page,
            # so this approach was abandoned
            response2 = scrapy.http.Response(item_details_url)
            hxs2 = HtmlXPathSelector(response2)
            item['detail'] = hxs2.select('/html/body/center/div/div[4]/div[1]/p[1]').extract()
            items.append(item)
        return items


class MySpider5(BaseSpider):
    # fourth attempt: share a single item through class attributes (not recommended)
    name = "myspider5"
    allowed_domains = ["10.60.32.179"]
    start_urls = [
        'http://10.60.32.179/Site/Site1/myindex.shtml',
    ]
    items = []
    item = DmozItem()

    def parse(self, response):
        # collect `item_urls`
        hxs = HtmlXPathSelector(response)
        item_urls = hxs.select('//*[@id="_ctl0_LblContent"]/div/div//ul/li')
        base_url = get_base_url(response)
        for item_url in item_urls:
            MySpider5.item['date'] = item_url.select('span/text()').extract()
            MySpider5.item['title'] = item_url.select('a/text()').extract()
            MySpider5.item['link'] = item_url.select('a/@href').extract()
            MySpider5.item['desc'] = item_url.select('text()').extract()
            relative_url = MySpider5.item['link']
            item_details_url = urljoin_rfc(base_url, relative_url[0])
            yield Request(url=item_details_url, callback=self.parse_details)

    def parse_details(self, response):
        # populate more `item` fields from the detail page
        hxs = HtmlXPathSelector(response)
        item_detail = hxs.select('/html/body/center/div/div[4]/div[1]/p[1]').extract()
        MySpider5.item['detail'] = item_detail
        MySpider5.items.append(MySpider5.item)
        return MySpider5.item

    def parse_details2(self, response):
        # alternative version of parse_details using an ItemLoader
        bbsItem_loader = ItemLoader(item=DmozItem(), response=response)
        url = str(response.url)
        bbsItem_loader.add_value('title', MySpider5.item['title'])
        abc = {'detail': '/html/body/center/div/div[4]/div[1]/p[1]'}
        bbsItem_loader.add_xpath('detail', abc['detail'])
        return bbsItem_loader.load_item()


class MySpider6(CrawlSpider):
    # final version for the first site: extract article links, then fill fields with an ItemLoader
    name = "myspider6"
    allowed_domains = ["10.60.32.179"]
    start_urls = [
        'http://10.60.32.179/Site/Site1/myindex.shtml',
    ]
    link_extractor = {
        # e.g. http://10.60.32.179/Site/Col411/Article/201510/35770_2015_10_29_8058797.shtml
        'page': SgmlLinkExtractor(allow=r'/Article/\w+/\w+\.shtml$'),
    }
    _x_query = {
        'date': 'span/text()',
        'date2': '/html/body/center/div/div[4]/p',
        'title': 'a/text()',
        'title2': '/html/body/center/div/div[4]/h2'
    }
    _y_query = {
        'detail': '/html/body/center/div/div[4]/div[1]/p[1]',
    }

    def parse(self, response):
        self.t = 0
        for link in self.link_extractor['page'].extract_links(response):
            yield Request(url=link.url, callback=self.parse_content)
            self.t = self.t + 1

    def parse_content(self, response):
        bbsItem_loader = ItemLoader(item=DmozItem(), response=response)
        url = str(response.url)
        bbsItem_loader.add_value('desc', url)
        bbsItem_loader.add_value('link', url)
        bbsItem_loader.add_xpath('title', self._x_query['title2'])
        bbsItem_loader.add_xpath('pubtime', self._x_query['date2'])
        bbsItem_loader.add_xpath('content', self._y_query['detail'])
        bbsItem_loader.add_value('id', self.t)  # why not useful?
        return bbsItem_loader.load_item()


class MySpider6SD(CrawlSpider):
    # the same spider adapted to the second site (different XPaths)
    name = "myspider6sd"
    allowed_domains = ["10.60.7.45"]
    start_urls = [
        'http://10.60.7.45/SITE_sdyc_WEB/Site1219/index.shtml',
    ]
    link_extractor = {
        # e.g. http://10.60.7.45/SITE_sdyc_WEB/Col1527/Article/201510/sdnw_2110280_2015_10_29_91353216.shtml
        'page': SgmlLinkExtractor(allow=r'/Article/\w+/\w+\.shtml$'),
    }
    _x_query = {
        'date': 'span/text()',
        'date2': '/html/body/center/div/div[4]/p',
        'title': 'a/text()',
        'title2': '/html/body/div[4]/div[1]/div[2]/div[1]/h1[2]/font'
    }
    _y_query = {
        'detail': '//*[@id="Zoom"]'
    }

    def parse(self, response):
        self.t = 0
        for link in self.link_extractor['page'].extract_links(response):
            yield Request(url=link.url, callback=self.parse_content)
            self.t = self.t + 1

    def parse_content(self, response):
        bbsItem_loader = ItemLoader(item=DmozItem(), response=response)
        url = str(response.url)
        bbsItem_loader.add_value('desc', url)
        bbsItem_loader.add_value('link', url)
        bbsItem_loader.add_xpath('title', self._x_query['title2'])
        bbsItem_loader.add_xpath('pubtime', self._x_query['date2'])
        bbsItem_loader.add_xpath('content', self._y_query['detail'])
        bbsItem_loader.add_value('id', self.t)  # why not useful?
        return bbsItem_loader.load_item()
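The introduction says the news list is crawled on a schedule, but the post does not show how the spider is triggered. One simple option is a cron job that runs the spider at a fixed interval; the paths, interval and log file below are assumptions:

# hypothetical crontab entry: run the myspider6sd spider every 30 minutes
*/30 * * * * cd /path/to/tutorial && scrapy crawl myspider6sd >> /var/log/newscrawler.log 2>&1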