• A classification model predicts a class label (a category index).
  • A regression model predicts a real-valued variable.

Types of regression models

  • Linear models

    • Least-squares regression
    • With L2 regularization: ridge regression (see the sketch after this list)
    • With L1 regularization: LASSO (Least Absolute Shrinkage and Selection Operator)
  • Decision trees
    • Impurity measure: variance
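Both regularized variants are available in MLlib through the regType parameter of LinearRegressionWithSGD (dedicated RidgeRegressionWithSGD and LassoWithSGD classes also exist). A minimal sketch, assuming data is an RDD of LabeledPoint as built in section 1.2 below:

    from pyspark.mllib.regression import LinearRegressionWithSGD

    # ridge regression: least squares plus an L2 penalty of strength regParam
    ridge = LinearRegressionWithSGD.train(data, iterations=10, step=0.1,
                                          regParam=1.0, regType='l2')
    # LASSO: least squares plus an L1 penalty, which drives some weights to exactly zero
    lasso = LinearRegressionWithSGD.train(data, iterations=10, step=0.1,
                                          regParam=1.0, regType='l1')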

0 Preparing the Data

archive.ics.uci.edu/ml/machine-learning-databases/00275/Bike-Sharing-Dataset.zip
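One way to fetch and unpack the archive (a sketch; wget/unzip and the target directory are assumptions, any equivalent tools work):

    wget http://archive.ics.uci.edu/ml/machine-learning-databases/00275/Bike-Sharing-Dataset.zip
    unzip Bike-Sharing-Dataset.zip -d Bike-Sharing-Dataset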

Then remove the header row so that every line is a data record:

    sed 1d hour.csv > hour_noheader.csv

0 Runtime Environment

    export SPARK_HOME=/Users/erichan/garden/spark-1.5.1-bin-hadoop2.6
    export PYTHONPATH=${SPARK_HOME}/python/:${SPARK_HOME}/python/lib/py4j-0.8.2.1-src.zip
    cd $SPARK_HOME
    IPYTHON=1 IPYTHON_OPTS="--pylab" ./bin/pyspark --driver-memory 4G --executor-memory 4G --driver-cores 2

Inside the pyspark shell, import everything the walkthrough needs:

    from pyspark.mllib.regression import LabeledPoint
    from pyspark.mllib.regression import LinearRegressionWithSGD
    from pyspark.mllib.tree import DecisionTree
    import numpy as np

1 Extracting Features

    PATH = "/Users/erichan/sourcecode/book/Spark机器学习"
    raw_data = sc.textFile("%s/Bike-Sharing-Dataset/hour_noheader.csv" % PATH)
    num_data = raw_data.count()
    records = raw_data.map(lambda x: x.split(","))
    first = records.first()
    print first
    print num_data

[u'1', u'2011-01-01', u'1', u'0', u'1', u'0', u'0', u'6', u'0', u'1', u'0.24', u'0.2879', u'0.81', u'0', u'3', u'13', u'16']

17379

1.1 Converting Categorical Features to Binary Vectors

    # cache the dataset to speed up subsequent operations
    records.cache()

    # map each distinct value of column idx to a unique index
    def get_mapping(rdd, idx):
        return rdd.map(lambda fields: fields[idx]).distinct().zipWithIndex().collectAsMap()

    print "Mapping of first categorical feature column: %s" % get_mapping(records, 2)

Mapping of first categorical feature column: {u'1': 0, u'3': 1, u'2': 2, u'4': 3}

    mappings = [get_mapping(records, i) for i in range(2, 10)]
    cat_len = sum(map(len, mappings))
    num_len = len(records.first()[10:14])  # the numeric columns: temp, atemp, hum, windspeed
    total_len = num_len + cat_len
    print "Feature vector length for categorical features: %d" % cat_len
    print "Feature vector length for numerical features: %d" % num_len
    print "Total feature vector length: %d" % total_len

Feature vector length for categorical features: 57

Feature vector length for numerical features: 4

Total feature vector length: 61

1.2 Building the Feature Vector for the Linear Model

    # extract the features
    def extract_features(record):
        cat_vec = np.zeros(cat_len)
        i = 0
        step = 0
        # NOTE: record[2:9] covers only 7 of the 8 mapped columns, so the one-hot
        # slots for the last mapping (weathersit, column 9) stay zero, as the
        # output below shows; record[2:10] would include that column as well
        for field in record[2:9]:
            m = mappings[i]
            idx = m[field]
            cat_vec[idx + step] = 1
            i = i + 1
            step = step + len(m)
        num_vec = np.array([float(field) for field in record[10:14]])
        return np.concatenate((cat_vec, num_vec))

    # extract the label (cnt, the last column)
    def extract_label(record):
        return float(record[-1])

    data = records.map(lambda r: LabeledPoint(extract_label(r), extract_features(r)))
    first_point = data.first()
    print "Raw data: " + str(first[2:])
    print "Label: " + str(first_point.label)
    print "Linear Model feature vector:\n" + str(first_point.features)
    print "Linear Model feature vector length: " + str(len(first_point.features))

Raw data: [u'1', u'0', u'1', u'0', u'0', u'6', u'0', u'1', u'0.24', u'0.2879', u'0.81', u'0', u'3', u'13', u'16']

Label: 16.0

Linear Model feature vector:
[1.0,0.0,0.0,0.0,0.0,1.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,1.0,0.0,0.0,0.0,0.0,0.24,0.2879,0.81,0.0]

Linear Model feature vector length: 61

1.3 Building the Feature Vector for the Decision Tree Model

    # the decision tree uses the raw categorical values directly, with no binary encoding
    def extract_features_dt(record):
        return np.array(map(float, record[2:14]))

    data_dt = records.map(lambda r: LabeledPoint(extract_label(r), extract_features_dt(r)))
    first_point_dt = data_dt.first()
    print "Decision Tree feature vector: " + str(first_point_dt.features)
    print "Decision Tree feature vector length: " + str(len(first_point_dt.features))

Decision Tree feature vector: [1.0,0.0,1.0,0.0,0.0,6.0,0.0,1.0,0.24,0.2879,0.81,0.0]

Decision Tree feature vector length: 12

2 Training

2.1 Help

    help(LinearRegressionWithSGD.train)
    help(DecisionTree.trainRegressor)

2.2 Training the Linear Model and Checking Its Predictions

    linear_model = LinearRegressionWithSGD.train(data, iterations=10, step=0.1, intercept=False)
    true_vs_predicted = data.map(lambda p: (p.label, linear_model.predict(p.features)))
    print "Linear Model predictions: " + str(true_vs_predicted.take(5))

Linear Model predictions: [(16.0, 117.89250386724845), (40.0, 116.2249612319211), (32.0, 116.02369145779234), (13.0, 115.67088016754433), (1.0, 115.56315650834317)]

2.3 Training the Decision Tree Model and Checking Its Predictions

    dt_model = DecisionTree.trainRegressor(data_dt, {})
    preds = dt_model.predict(data_dt.map(lambda p: p.features))
    actual = data.map(lambda p: p.label)
    true_vs_predicted_dt = actual.zip(preds)
    print "Decision Tree predictions: " + str(true_vs_predicted_dt.take(5))
    print "Decision Tree depth: " + str(dt_model.depth())
    print "Decision Tree number of nodes: " + str(dt_model.numNodes())

Decision Tree predictions: [(16.0, 54.913223140495866), (40.0, 54.913223140495866), (32.0, 53.171052631578945), (13.0, 14.284023668639053), (1.0, 14.284023668639053)]

Decision Tree depth: 5

Decision Tree number of nodes: 63

3 Evaluating Performance

Methods for evaluating regression models (the error-based ones are defined just below):

  • Mean Squared Error (MSE)
  • Root Mean Squared Error (RMSE)
  • Mean Absolute Error (MAE)
  • R-squared coefficient
  • Root Mean Squared Log Error (RMSLE)
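With $y_i$ the true targets, $\hat{y}_i$ the predictions, and $n$ the number of examples:

$$\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i-y_i)^2,\qquad \mathrm{RMSE}=\sqrt{\mathrm{MSE}},\qquad \mathrm{MAE}=\frac{1}{n}\sum_{i=1}^{n}\lvert\hat{y}_i-y_i\rvert$$

$$\mathrm{RMSLE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(\log(\hat{y}_i+1)-\log(y_i+1)\bigr)^2}$$

RMSLE penalizes relative rather than absolute deviations, which suits targets that span a wide range, such as hourly rental counts.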

3.1 Mean Squared Error & Root Mean Squared Error

    def squared_error(actual, pred):
        return (pred - actual)**2

    mse = true_vs_predicted.map(lambda (t, p): squared_error(t, p)).mean()
    mse_dt = true_vs_predicted_dt.map(lambda (t, p): squared_error(t, p)).mean()

    # arity of each categorical column, keyed by feature index (shifted to start at 0);
    # the +1 keeps every raw value strictly below the declared arity
    cat_features = dict([(i - 2, len(get_mapping(records, i)) + 1) for i in range(2, 10)])
    # train the model again, this time declaring which features are categorical
    dt_model_2 = DecisionTree.trainRegressor(data_dt, categoricalFeaturesInfo=cat_features)
    preds_2 = dt_model_2.predict(data_dt.map(lambda p: p.features))
    actual_2 = data.map(lambda p: p.label)
    true_vs_predicted_dt_2 = actual_2.zip(preds_2)
    # compute performance metrics for the decision tree model
    mse_dt_2 = true_vs_predicted_dt_2.map(lambda (t, p): squared_error(t, p)).mean()

    print "Linear Model - Mean Squared Error: %2.4f" % mse
    print "Decision Tree - Mean Squared Error: %2.4f" % mse_dt
    print "Categorical feature size mapping %s" % cat_features
    print "Decision Tree [Categorical feature]- Mean Squared Error: %2.4f" % mse_dt_2

Linear Model - Mean Squared Error: 30679.4539

Decision Tree - Mean Squared Error: 11560.7978

Decision Tree [Categorical feature]- Mean Squared Error: 7912.5642

3.2 Mean Absolute Error

    def abs_error(actual, pred):
        return np.abs(pred - actual)

    mae = true_vs_predicted.map(lambda (t, p): abs_error(t, p)).mean()
    mae_dt = true_vs_predicted_dt.map(lambda (t, p): abs_error(t, p)).mean()
    mae_dt_2 = true_vs_predicted_dt_2.map(lambda (t, p): abs_error(t, p)).mean()
    print "Linear Model - Mean Absolute Error: %2.4f" % mae
    print "Decision Tree - Mean Absolute Error: %2.4f" % mae_dt
    print "Decision Tree [Categorical feature]- Mean Absolute Error: %2.4f" % mae_dt_2

Linear Model - Mean Absolute Error: 130.6429

Decision Tree - Mean Absolute Error: 71.0969

Decision Tree [Categorical feature]- Mean Absolute Error: 59.4409

3.3 Root Mean Squared Log Error

    # symmetric in its arguments, so the (actual, pred) call order below is safe
    def squared_log_error(actual, pred):
        return (np.log(pred + 1) - np.log(actual + 1))**2

    rmsle = np.sqrt(true_vs_predicted.map(lambda (t, p): squared_log_error(t, p)).mean())
    rmsle_dt = np.sqrt(true_vs_predicted_dt.map(lambda (t, p): squared_log_error(t, p)).mean())
    rmsle_dt_2 = np.sqrt(true_vs_predicted_dt_2.map(lambda (t, p): squared_log_error(t, p)).mean())
    print "Linear Model - Root Mean Squared Log Error: %2.4f" % rmsle
    print "Decision Tree - Root Mean Squared Log Error: %2.4f" % rmsle_dt
    print "Decision Tree [Categorical feature]- Root Mean Squared Log Error: %2.4f" % rmsle_dt_2

Linear Model - Root Mean Squared Log Error: 1.4653

Decision Tree - Root Mean Squared Log Error: 0.6259

Decision Tree [Categorical feature]- Root Mean Squared Log Error: 0.6192

4 Improving the Model and Tuning Parameters

    # plot the distribution of the raw target values
    # (hist, plot and bar come from the --pylab startup mode)
    targets = records.map(lambda r: float(r[-1])).collect()
    hist(targets, bins=40, color='lightblue', normed=True)
    fig = matplotlib.pyplot.gcf()
    fig.set_size_inches(16, 10)

Because the target values do **not follow a normal distribution**, it can help to apply a **log transformation** (replacing each raw target value with its logarithm) or a **square-root transformation**.

4.1 Log Transformation

    log_targets = records.map(lambda r: np.log(float(r[-1]))).collect()
    hist(log_targets, bins=40, color='lightblue', normed=True)
    fig = matplotlib.pyplot.gcf()
    fig.set_size_inches(16, 10)

4.2 Square-Root Transformation

    sqrt_targets = records.map(lambda r: np.sqrt(float(r[-1]))).collect()
    hist(sqrt_targets, bins=40, color='lightblue', normed=True)
    fig = matplotlib.pyplot.gcf()
    fig.set_size_inches(16, 10)

4.3 Impact of the Log Transformation

    # linear model on log-transformed targets
    data_log = data.map(lambda lp: LabeledPoint(np.log(lp.label), lp.features))
    model_log = LinearRegressionWithSGD.train(data_log, iterations=10, step=0.1)
    true_vs_predicted_log = data_log.map(lambda p: (np.exp(p.label), np.exp(model_log.predict(p.features))))

    # decision tree on log-transformed targets
    data_dt_log = data_dt.map(lambda lp: LabeledPoint(np.log(lp.label), lp.features))
    dt_model_log = DecisionTree.trainRegressor(data_dt_log, {})
    preds_log = dt_model_log.predict(data_dt_log.map(lambda p: p.features))
    actual_log = data_dt_log.map(lambda p: p.label)
    true_vs_predicted_dt_log = actual_log.zip(preds_log).map(lambda (t, p): (np.exp(t), np.exp(p)))

    # metrics are computed after transforming predictions back to the original scale
    mse_log = true_vs_predicted_log.map(lambda (t, p): squared_error(t, p)).mean()
    mae_log = true_vs_predicted_log.map(lambda (t, p): abs_error(t, p)).mean()
    rmsle_log = np.sqrt(true_vs_predicted_log.map(lambda (t, p): squared_log_error(t, p)).mean())
    mse_log_dt = true_vs_predicted_dt_log.map(lambda (t, p): squared_error(t, p)).mean()
    mae_log_dt = true_vs_predicted_dt_log.map(lambda (t, p): abs_error(t, p)).mean()
    rmsle_log_dt = np.sqrt(true_vs_predicted_dt_log.map(lambda (t, p): squared_log_error(t, p)).mean())

    print "Mean Squared Error: %2.4f" % mse_log
    print "Mean Absolute Error: %2.4f" % mae_log
    print "Root Mean Squared Log Error: %2.4f" % rmsle_log
    print "Non log-transformed predictions:\n" + str(true_vs_predicted.take(3))
    print "Log-transformed predictions:\n" + str(true_vs_predicted_log.take(3))
    print "Mean Squared Error: %2.4f" % mse_log_dt
    print "Mean Absolute Error: %2.4f" % mae_log_dt
    print "Root Mean Squared Log Error: %2.4f" % rmsle_log_dt
    print "Non log-transformed predictions:\n" + str(true_vs_predicted_dt.take(3))
    print "Log-transformed predictions:\n" + str(true_vs_predicted_dt_log.take(3))

Mean Squared Error: 50685.5559

Mean Absolute Error: 155.2955

Root Mean Squared Log Error: 1.5411

Non log-transformed predictions:
[(16.0, 117.89250386724845), (40.0, 116.2249612319211), (32.0, 116.02369145779234)]

Log-transformed predictions:
[(15.999999999999998, 28.080291845456237), (40.0, 26.959480191001784), (32.0, 26.654725629458031)]

Mean Squared Error: 14781.5760

Mean Absolute Error: 76.4131

Root Mean Squared Log Error: 0.6406

Non log-transformed predictions:
[(16.0, 54.913223140495866), (40.0, 54.913223140495866), (32.0, 53.171052631578945)]

Log-transformed predictions:
[(15.999999999999998, 37.530779787154522), (40.0, 37.530779787154522), (32.0, 7.2797070993907287)]

4.4 Creating Training and Test Sets for Cross-Validation

    data_with_idx = data.zipWithIndex().map(lambda (k, v): (v, k))
    test = data_with_idx.sample(False, 0.2, 42)
    train = data_with_idx.subtractByKey(test)
    train_data = train.map(lambda (idx, p): p)
    test_data = test.map(lambda (idx, p): p)

    data_with_idx_dt = data_dt.zipWithIndex().map(lambda (k, v): (v, k))
    test_dt = data_with_idx_dt.sample(False, 0.2, 42)
    train_dt = data_with_idx_dt.subtractByKey(test_dt)
    train_data_dt = train_dt.map(lambda (idx, p): p)
    test_data_dt = test_dt.map(lambda (idx, p): p)

    train_size = train_data.count()
    test_size = test_data.count()
    print "Training data size: %d" % train_size
    print "Test data size: %d" % test_size
    print "Total data size: %d " % num_data
    print "Train + Test size : %d" % (train_size + test_size)

Training data size: 13934

Test data size: 3445

Total data size: 17379

Train + Test size : 17379
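As an aside, RDDs also offer randomSplit, which produces the same kind of approximate 80/20 split in a single call. A sketch (the seed value is illustrative):

    # alternative split: the two parts are disjoint, sizes are approximate
    train_data_alt, test_data_alt = data.randomSplit([0.8, 0.2], seed=42)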

4.5 Tuning the Linear Model

1 Evaluation function
    def evaluate(train, test, iterations, step, regParam, regType, intercept):
        model = LinearRegressionWithSGD.train(train, iterations, step, regParam=regParam, regType=regType, intercept=intercept)
        tp = test.map(lambda p: (p.label, model.predict(p.features)))
        rmsle = np.sqrt(tp.map(lambda (t, p): squared_log_error(t, p)).mean())
        return rmsle
2 Number of iterations
    params = [1, 5, 10, 20, 50, 100]
    metrics = [evaluate(train_data, test_data, param, 0.01, 0.0, 'l2', False) for param in params]
    print params
    print metrics

[1, 5, 10, 20, 50, 100]

[2.8779465130028199, 2.0390187660391499, 1.7761565324837874, 1.5828778102209105, 1.4382263191764473, 1.4050638054019446]

    plot(params, metrics)
    fig = matplotlib.pyplot.gcf()
    pyplot.xscale('log')

RMSLE as a function of the number of iterations

3 Step size
    params = [0.01, 0.025, 0.05, 0.1, 1.0]
    metrics = [evaluate(train_data, test_data, 10, param, 0.0, 'l2', False) for param in params]
    print params
    print metrics

[0.01, 0.025, 0.05, 0.1, 1.0]

[1.7761565324837874, 1.4379348243997032, 1.4189071944747715, 1.5027293911925559, nan]

    plot(params, metrics)
    fig = matplotlib.pyplot.gcf()
    pyplot.xscale('log')

Effect of step size on prediction performance

4 L2 regularization
    params = [0.0, 0.01, 0.1, 1.0, 5.0, 10.0, 20.0]
    metrics = [evaluate(train_data, test_data, 10, 0.1, param, 'l2', False) for param in params]
    print params
    print metrics
    plot(params, metrics)
    fig = matplotlib.pyplot.gcf()
    pyplot.xscale('log')

[0.0, 0.01, 0.1, 1.0, 5.0, 10.0, 20.0]

[1.5027293911925559, 1.5020646031965639, 1.4961903335175231, 1.4479313176192781, 1.4113329999970989, 1.5379824584440471, 1.8279564444985839]

5 L1 regularization
    params = [0.0, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]
    metrics = [evaluate(train_data, test_data, 10, 0.1, param, 'l1', False) for param in params]
    print params
    print metrics
    plot(params, metrics)
    fig = matplotlib.pyplot.gcf()
    pyplot.xscale('log')

[0.0, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]

[1.5027293911925559, 1.5026938950690176, 1.5023761634555699, 1.499412856617814, 1.4713669769550108, 1.7596682962964318, 4.7551250073268614]

    # L1 regularization encourages sparsity: count the weights driven to exactly zero
    model_l1 = LinearRegressionWithSGD.train(train_data, 10, 0.1, regParam=1.0, regType='l1', intercept=False)
    model_l1_10 = LinearRegressionWithSGD.train(train_data, 10, 0.1, regParam=10.0, regType='l1', intercept=False)
    model_l1_100 = LinearRegressionWithSGD.train(train_data, 10, 0.1, regParam=100.0, regType='l1', intercept=False)
    print "L1 (1.0) number of zero weights: " + str(sum(model_l1.weights.array == 0))
    print "L1 (10.0) number of zero weights: " + str(sum(model_l1_10.weights.array == 0))
    print "L1 (100.0) number of zero weights: " + str(sum(model_l1_100.weights.array == 0))

L1 (1.0) number of zero weights: 4
L1 (10.0) number of zero weights: 33
L1 (100.0) number of zero weights: 58

6 Intercept
    # intercept
    params = [False, True]
    metrics = [evaluate(train_data, test_data, 10, 0.1, 1.0, 'l2', param) for param in params]
    print params
    print metrics
    bar(params, metrics, color='lightblue')
    fig = matplotlib.pyplot.gcf()

[False, True]

[1.4479313176192781, 1.4798261513419801]

4.6 Tuning the Decision Tree

1 Evaluation function
    def evaluate_dt(train, test, maxDepth, maxBins):
        model = DecisionTree.trainRegressor(train, {}, impurity='variance', maxDepth=maxDepth, maxBins=maxBins)
        preds = model.predict(test.map(lambda p: p.features))
        actual = test.map(lambda p: p.label)
        tp = actual.zip(preds)
        rmsle = np.sqrt(tp.map(lambda (t, p): squared_log_error(t, p)).mean())
        return rmsle
2 Tree depth
    params = [1, 2, 3, 4, 5, 10, 20]
    metrics = [evaluate_dt(train_data_dt, test_data_dt, param, 32) for param in params]
    print params
    print metrics
    plot(params, metrics)
    fig = matplotlib.pyplot.gcf()

[1, 2, 3, 4, 5, 10, 20]

[1.0280339660196287, 0.92686672078778276, 0.81807794023407532, 0.74060228537329209, 0.63583503599563096, 0.4276659008415965, 0.45481197001756291]

3 Maximum number of bins
    params = [2, 4, 8, 16, 32, 64, 100]
    metrics = [evaluate_dt(train_data_dt, test_data_dt, 5, param) for param in params]
    print params
    print metrics
    plot(params, metrics)
    fig = matplotlib.pyplot.gcf()

[2, 4, 8, 16, 32, 64, 100]

[1.3076555360778914, 0.81721457107308615, 0.75651792347650992, 0.63786761731722474, 0.63583503599563096, 0.63583503599563096, 0.63583503599563096]
