(1) How to compute the Cost function in Univariate/Multivariate Linear Regression;

(2) How to compute the Batch Gradient Descent function in Univariate/Multivariate Linear Regression;

(3) How to scale features by mean value and standard deviation;

(4) How to calculate Theta by the normal equation;
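For reference, the four items above correspond to these standard formulas (course notation: m training examples, features x with x_0 = 1):

```latex
h_\theta(x) = \theta^{T}x, \qquad
J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^{2}

\text{Gradient descent: } \theta_j := \theta_j - \frac{\alpha}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)}

\text{Feature scaling: } x_j := \frac{x_j - \mu_j}{\sigma_j}, \qquad
\text{Normal equation: } \theta = (X^{T}X)^{-1}X^{T}y
```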

Data1 (ex1data1.txt — population in 10,000s, profit in $10,000s)

  1. 6.1101,17.592
  2. 5.5277,9.1302
  3. 8.5186,13.662
  4. 7.0032,11.854
  5. 5.8598,6.8233
  6. 8.3829,11.886
  7. 7.4764,4.3483
  8. 8.5781,12
  9. 6.4862,6.5987
  10. 5.0546,3.8166
  11. 5.7107,3.2522
  12. 14.164,15.505
  13. 5.734,3.1551
  14. 8.4084,7.2258
  15. 5.6407,0.71618
  16. 5.3794,3.5129
  17. 6.3654,5.3048
  18. 5.1301,0.56077
  19. 6.4296,3.6518
  20. 7.0708,5.3893
  21. 6.1891,3.1386
  22. 20.27,21.767
  23. 5.4901,4.263
  24. 6.3261,5.1875
  25. 5.5649,3.0825
  26. 18.945,22.638
  27. 12.828,13.501
  28. 10.957,7.0467
  29. 13.176,14.692
  30. 22.203,24.147
  31. 5.2524,-1.22
  32. 6.5894,5.9966
  33. 9.2482,12.134
  34. 5.8918,1.8495
  35. 8.2111,6.5426
  36. 7.9334,4.5623
  37. 8.0959,4.1164
  38. 5.6063,3.3928
  39. 12.836,10.117
  40. 6.3534,5.4974
  41. 5.4069,0.55657
  42. 6.8825,3.9115
  43. 11.708,5.3854
  44. 5.7737,2.4406
  45. 7.8247,6.7318
  46. 7.0931,1.0463
  47. 5.0702,5.1337
  48. 5.8014,1.844
  49. 11.7,8.0043
  50. 5.5416,1.0179
  51. 7.5402,6.7504
  52. 5.3077,1.8396
  53. 7.4239,4.2885
  54. 7.6031,4.9981
  55. 6.3328,1.4233
  56. 6.3589,-1.4211
  57. 6.2742,2.4756
  58. 5.6397,4.6042
  59. 9.3102,3.9624
  60. 9.4536,5.4141
  61. 8.8254,5.1694
  62. 5.1793,-0.74279
  63. 21.279,17.929
  64. 14.908,12.054
  65. 18.959,17.054
  66. 7.2182,4.8852
  67. 8.2951,5.7442
  68. 10.236,7.7754
  69. 5.4994,1.0173
  70. 20.341,20.992
  71. 10.136,6.6799
  72. 7.3345,4.0259
  73. 6.0062,1.2784
  74. 7.2259,3.3411
  75. 5.0269,-2.6807
  76. 6.5479,0.29678
  77. 7.5386,3.8845
  78. 5.0365,5.7014
  79. 10.274,6.7526
  80. 5.1077,2.0576
  81. 5.7292,0.47953
  82. 5.1884,0.20421
  83. 6.3557,0.67861
  84. 9.7687,7.5435
  85. 6.5159,5.3436
  86. 8.5172,4.2415
  87. 9.1802,6.7981
  88. 6.002,0.92695
  89. 5.5204,0.152
  90. 5.0594,2.8214
  91. 5.7077,1.8451
  92. 7.6366,4.2959
  93. 5.8707,7.2029
  94. 5.3054,1.9869
  95. 8.2934,0.14454
  96. 13.394,9.0551
  97. 5.4369,0.61705

1. ex1.m

    %% Machine Learning Online Class - Exercise 1: Linear Regression

    %  Instructions
    %  ------------
    %
    %  This file contains code that helps you get started on the
    %  linear exercise. You will need to complete the following functions
    %  in this exercise:
    %
    %     warmUpExercise.m
    %     plotData.m
    %     gradientDescent.m
    %     computeCost.m
    %     gradientDescentMulti.m
    %     computeCostMulti.m
    %     featureNormalize.m
    %     normalEqn.m
    %
    %  For this exercise, you will not need to change any code in this file,
    %  or any other files other than those mentioned above.
    %
    %  x refers to the population size in 10,000s
    %  y refers to the profit in $10,000s
    %

    %% Initialization
    clear; close all; clc

    %% ==================== Part 1: Basic Function ====================
    % Complete warmUpExercise.m
    fprintf('Running warmUpExercise ... \n');
    fprintf('5x5 Identity Matrix: \n');
    warmUpExercise()

    fprintf('Program paused. Press enter to continue.\n');
    pause;

    %% ======================= Part 2: Plotting =======================
    fprintf('Plotting Data ...\n')
    data = load('ex1data1.txt');
    X = data(:, 1); y = data(:, 2);
    m = length(y); % number of training examples

    % Plot Data
    % Note: You have to complete the code in plotData.m
    plotData(X, y);

    fprintf('Program paused. Press enter to continue.\n');
    pause;

    %% =================== Part 3: Gradient descent ===================
    fprintf('Running Gradient Descent ...\n')

    X = [ones(m, 1), data(:, 1)]; % Add a column of ones to x
    theta = zeros(2, 1);          % initialize fitting parameters

    % Some gradient descent settings
    iterations = 1500;
    alpha = 0.01;

    % compute and display initial cost
    computeCost(X, y, theta)

    % run gradient descent
    theta = gradientDescent(X, y, theta, alpha, iterations);

    % print theta to screen
    fprintf('Theta found by gradient descent: ');
    fprintf('%f %f \n', theta(1), theta(2));

    % Plot the linear fit
    hold on; % keep previous plot visible
    plot(X(:,2), X*theta, '-')
    legend('Training data', 'Linear regression')
    hold off % don't overlay any more plots on this figure

    % Predict values for population sizes of 35,000 and 70,000
    predict1 = [1, 3.5] * theta;
    fprintf('For population = 35,000, we predict a profit of %f\n', ...
        predict1*10000);
    predict2 = [1, 7] * theta;
    fprintf('For population = 70,000, we predict a profit of %f\n', ...
        predict2*10000);

    fprintf('Program paused. Press enter to continue.\n');
    pause;

    %% ============= Part 4: Visualizing J(theta_0, theta_1) =============
    fprintf('Visualizing J(theta_0, theta_1) ...\n')

    % Grid over which we will calculate J
    theta0_vals = linspace(-10, 10, 100);
    theta1_vals = linspace(-1, 4, 100);

    % initialize J_vals to a matrix of 0's
    J_vals = zeros(length(theta0_vals), length(theta1_vals));

    % Fill out J_vals
    for i = 1:length(theta0_vals)
        for j = 1:length(theta1_vals)
            t = [theta0_vals(i); theta1_vals(j)];
            J_vals(i,j) = computeCost(X, y, t);
        end
    end

    % Because of the way meshgrids work in the surf command, we need to
    % transpose J_vals before calling surf, or else the axes will be flipped
    J_vals = J_vals';
    % Surface plot
    figure;
    surf(theta0_vals, theta1_vals, J_vals)
    xlabel('\theta_0'); ylabel('\theta_1');

    % Contour plot
    figure;
    % Plot J_vals as 20 contours spaced logarithmically between 0.01 and 1000
    contour(theta0_vals, theta1_vals, J_vals, logspace(-2, 3, 20))
    xlabel('\theta_0'); ylabel('\theta_1');
    hold on;
    plot(theta(1), theta(2), 'rx', 'MarkerSize', 10, 'LineWidth', 2);

2. warmUpExercise.m

    function A = warmUpExercise()
    %WARMUPEXERCISE Example function in octave
    %   A = WARMUPEXERCISE() is an example function that returns the 5x5 identity matrix

    A = [];
    % ============= YOUR CODE HERE ==============
    % Instructions: Return the 5x5 identity matrix
    %               In octave, we return values by defining which variables
    %               represent the return values (at the top of the file)
    %               and then set them accordingly.
    A = eye(5);

    % ===========================================

    end

3. computeCost.m

    function J = computeCost(X, y, theta)
    %COMPUTECOST Compute cost for linear regression
    %   J = COMPUTECOST(X, y, theta) computes the cost of using theta as the
    %   parameter for linear regression to fit the data points in X and y

    % Initialize some useful values
    m = length(y); % number of training examples

    % You need to return the following variables correctly
    J = 0;

    % ====================== YOUR CODE HERE ======================
    % Instructions: Compute the cost of a particular choice of theta
    %               You should set J to the cost.
    hypothesis = X * theta;
    J = 1/(2*m) * sum((hypothesis - y).^2);

    % ============================================================

    end
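The vectorized Octave cost above can be sketched in plain Python (no NumPy) for clarity — the function and variable names here are illustrative, not part of the course code:

```python
def compute_cost(X, y, theta):
    """Linear-regression cost J = 1/(2m) * sum((h_theta(x) - y)^2).

    X: list of feature rows, each already including the leading 1 (intercept).
    y: list of targets.  theta: list of parameters.
    """
    m = len(y)
    total = 0.0
    for xi, yi in zip(X, y):
        h = sum(t * x for t, x in zip(theta, xi))  # hypothesis h_theta(x)
        total += (h - yi) ** 2
    return total / (2 * m)

# First two rows of Data1; with theta = [0, 0] the cost is sum(y^2) / (2m).
X = [[1.0, 6.1101], [1.0, 5.5277]]
y = [17.592, 9.1302]
J = compute_cost(X, y, [0.0, 0.0])
```

With a perfect fit (hypothesis equal to y on every example) the cost is exactly zero, which is a quick sanity check for any implementation.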

4. gradientDescent.m

    function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
    %GRADIENTDESCENT Performs gradient descent to learn theta
    %   theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by
    %   taking num_iters gradient steps with learning rate alpha

    % Initialize some useful values
    m = length(y); % number of training examples
    J_history = zeros(num_iters, 1);

    for iter = 1:num_iters

        % ====================== YOUR CODE HERE ======================
        % Instructions: Perform a single gradient step on the parameter vector
        %               theta.
        %
        % Hint: While debugging, it can be useful to print out the values
        %       of the cost function (computeCost) and gradient here.
        %
        hypothesis = X * theta;
        delta = X' * (hypothesis - y); % gradient (times m) for all parameters at once
        theta = theta - alpha/m * delta;

        % ============================================================

        % Save the cost J in every iteration
        J_history(iter) = computeCost(X, y, theta);

    end

    end
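The batch update `theta = theta - alpha/m * X'*(X*theta - y)` can be unrolled in plain Python to make the per-parameter sums explicit — an illustrative sketch, not the course implementation:

```python
def compute_cost(X, y, theta):
    m = len(y)
    return sum((sum(t * x for t, x in zip(theta, xi)) - yi) ** 2
               for xi, yi in zip(X, y)) / (2 * m)

def gradient_descent(X, y, theta, alpha, num_iters):
    m = len(y)
    n = len(theta)
    J_history = []
    for _ in range(num_iters):
        # errors_i = h_theta(x_i) - y_i
        errors = [sum(t * x for t, x in zip(theta, xi)) - yi
                  for xi, yi in zip(X, y)]
        # delta_j = sum_i errors_i * x_ij  (the Octave X' * (X*theta - y))
        delta = [sum(e * xi[j] for e, xi in zip(errors, X)) for j in range(n)]
        # simultaneous update of all parameters
        theta = [t - alpha / m * d for t, d in zip(theta, delta)]
        J_history.append(compute_cost(X, y, theta))
    return theta, J_history

# Toy data lying exactly on y = 2x: theta should approach [0, 2].
X = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
y = [2.0, 4.0, 6.0]
theta, J_hist = gradient_descent(X, y, [0.0, 0.0], 0.1, 2000)
```

Note the simultaneous update: `delta` is computed entirely from the old `theta` before any parameter changes, which is exactly what the vectorized Octave expression guarantees.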

Data2 (ex1data2.txt — size in sq-ft, number of bedrooms, price)

  1. 2104,3,399900
  2. 1600,3,329900
  3. 2400,3,369000
  4. 1416,2,232000
  5. 3000,4,539900
  6. 1985,4,299900
  7. 1534,3,314900
  8. 1427,3,198999
  9. 1380,3,212000
  10. 1494,3,242500
  11. 1940,4,239999
  12. 2000,3,347000
  13. 1890,3,329999
  14. 4478,5,699900
  15. 1268,3,259900
  16. 2300,4,449900
  17. 1320,2,299900
  18. 1236,3,199900
  19. 2609,4,499998
  20. 3031,4,599000
  21. 1767,3,252900
  22. 1888,2,255000
  23. 1604,3,242900
  24. 1962,4,259900
  25. 3890,3,573900
  26. 1100,3,249900
  27. 1458,3,464500
  28. 2526,3,469000
  29. 2200,3,475000
  30. 2637,3,299900
  31. 1839,2,349900
  32. 1000,1,169900
  33. 2040,4,314900
  34. 3137,3,579900
  35. 1811,4,285900
  36. 1437,3,249900
  37. 1239,3,229900
  38. 2132,4,345000
  39. 4215,4,549000
  40. 2162,4,287000
  41. 1664,2,368500
  42. 2238,3,329900
  43. 2567,4,314000
  44. 1200,3,299000
  45. 852,2,179900
  46. 1852,4,299900
  47. 1203,3,239500

0. ex1_multi.m

    %% Machine Learning Online Class
    %  Exercise 1: Linear regression with multiple variables
    %
    %  Instructions
    %  ------------
    %
    %  This file contains code that helps you get started on the
    %  linear regression exercise.
    %
    %  You will need to complete the following functions in this
    %  exercise:
    %
    %     warmUpExercise.m
    %     plotData.m
    %     gradientDescent.m
    %     computeCost.m
    %     gradientDescentMulti.m
    %     computeCostMulti.m
    %     featureNormalize.m
    %     normalEqn.m
    %
    %  For this part of the exercise, you will need to change some
    %  parts of the code below for various experiments (e.g., changing
    %  learning rates).
    %

    %% Initialization

    %% ================ Part 1: Feature Normalization ================

    %% Clear and Close Figures
    clear; close all; clc

    fprintf('Loading data ...\n');

    %% Load Data
    data = load('ex1data2.txt');
    X = data(:, 1:2);
    y = data(:, 3);
    m = length(y);

    % Print out some data points
    fprintf('First 10 examples from the dataset: \n');
    fprintf(' x = [%.0f %.0f], y = %.0f \n', [X(1:10,:) y(1:10,:)]');

    fprintf('Program paused. Press enter to continue.\n');
    pause;

    % Scale features and set them to zero mean
    fprintf('Normalizing Features ...\n');

    [X, mu, sigma] = featureNormalize(X);

    % Add intercept term to X
    X = [ones(m, 1) X];

    %% ================ Part 2: Gradient Descent ================

    % ====================== YOUR CODE HERE ======================
    % Instructions: We have provided you with the following starter
    %               code that runs gradient descent with a particular
    %               learning rate (alpha).
    %
    %               Your task is to first make sure that your functions -
    %               computeCost and gradientDescent already work with
    %               this starter code and support multiple variables.
    %
    %               After that, try running gradient descent with
    %               different values of alpha and see which one gives
    %               you the best result.
    %
    %               Finally, you should complete the code at the end
    %               to predict the price of a 1650 sq-ft, 3 br house.
    %
    % Hint: By using the 'hold on' command, you can plot multiple
    %       graphs on the same figure.
    %
    % Hint: At prediction, make sure you do the same feature normalization.
    %

    fprintf('Running gradient descent ...\n');

    % Choose some alpha value
    alpha = 0.01;
    num_iters = 400;

    % Init Theta and Run Gradient Descent
    theta = zeros(3, 1);
    [theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters);

    % Plot the convergence graph
    figure;
    plot(1:numel(J_history), J_history, '-b', 'LineWidth', 2);
    xlabel('Number of iterations');
    ylabel('Cost J');

    % Display gradient descent's result
    fprintf('Theta computed from gradient descent: \n');
    fprintf(' %f \n', theta);
    fprintf('\n');

    % Estimate the price of a 1650 sq-ft, 3 br house
    % ====================== YOUR CODE HERE ======================
    % Recall that the first column of X is all-ones. Thus, it does
    % not need to be normalized.
    % Normalize the query with the SAME mu/sigma used in training:
    price = [1, ([1650 3] - mu) ./ sigma] * theta;

    % ============================================================

    fprintf(['Predicted price of a 1650 sq-ft, 3 br house ' ...
             '(using gradient descent):\n $%f\n'], price);

    fprintf('Program paused. Press enter to continue.\n');
    pause;

    %% ================ Part 3: Normal Equations ================

    fprintf('Solving with normal equations...\n');

    % ====================== YOUR CODE HERE ======================
    % Instructions: The following code computes the closed form
    %               solution for linear regression using the normal
    %               equations. You should complete the code in
    %               normalEqn.m
    %
    %               After doing so, you should complete this code
    %               to predict the price of a 1650 sq-ft, 3 br house.
    %

    %% Load Data
    data = csvread('ex1data2.txt');
    X = data(:, 1:2);
    y = data(:, 3);
    m = length(y);

    % Add intercept term to X
    X = [ones(m, 1) X];

    % Calculate the parameters from the normal equation
    theta = normalEqn(X, y);

    % Display normal equation's result
    fprintf('Theta computed from the normal equations: \n');
    fprintf(' %f \n', theta);
    fprintf('\n');

    % Estimate the price of a 1650 sq-ft, 3 br house
    % ====================== YOUR CODE HERE ======================
    % No normalization here: theta was fit on the raw features
    price = [1, 1650, 3] * theta;

    % ============================================================

    fprintf(['Predicted price of a 1650 sq-ft, 3 br house ' ...
             '(using normal equations):\n $%f\n'], price);
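The prediction hint above ("make sure you do the same feature normalization") is easy to get wrong, so here is a minimal Python sketch of it. The numeric values of `mu`, `sigma`, and `theta` below are made-up placeholders for illustration — they are not the statistics or parameters of this dataset:

```python
def normalize_row(row, mu, sigma):
    """Apply the z-scoring learned at training time to one query row."""
    return [(v, (v - m) / s)[1] for v, m, s in zip(row, mu, sigma)]

# Hypothetical training-time statistics and fitted parameters (placeholders):
mu = [2000.0, 3.0]                        # per-feature means
sigma = [800.0, 1.0]                      # per-feature standard deviations
theta = [300000.0, 100000.0, -5000.0]     # [intercept, size, bedrooms]

# Predict for a 1650 sq-ft, 3-bedroom house: normalize first, then dot with theta.
x = normalize_row([1650.0, 3.0], mu, sigma)
price = theta[0] + sum(t * v for t, v in zip(theta[1:], x))
```

The intercept term corresponds to the all-ones column and is never normalized, which is why `theta[0]` is added directly.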

1. featureNormalize.m

    function [X_norm, mu, sigma] = featureNormalize(X)
    %FEATURENORMALIZE Normalizes the features in X
    %   FEATURENORMALIZE(X) returns a normalized version of X where
    %   the mean value of each feature is 0 and the standard deviation
    %   is 1. This is often a good preprocessing step to do when
    %   working with learning algorithms.

    % You need to set these values correctly
    X_norm = X;
    mu = zeros(1, size(X, 2));
    sigma = zeros(1, size(X, 2));

    % ====================== YOUR CODE HERE ======================
    % Instructions: First, for each feature dimension, compute the mean
    %               of the feature and subtract it from the dataset,
    %               storing the mean value in mu. Next, compute the
    %               standard deviation of each feature and divide
    %               each feature by its standard deviation, storing
    %               the standard deviation in sigma.
    %
    %               Note that X is a matrix where each column is a
    %               feature and each row is an example. You need
    %               to perform the normalization separately for
    %               each feature.
    %
    % Hint: You might find the 'mean' and 'std' functions useful.
    %
    mu = mean(X);
    sigma = std(X);
    % Broadcasting subtracts mu from every row and divides by sigma.
    % (The Octave-only '.-' operator is unnecessary and errors in MATLAB.)
    X_norm = (X - mu) ./ sigma;

    % ============================================================

    end
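The column-wise z-scoring above can be sketched in plain Python. Note that Octave's `std` divides by n-1 (the sample standard deviation), which is matched here; the names are illustrative and the sketch assumes no constant columns (sigma would be zero):

```python
import math

def feature_normalize(X):
    """Return (X_norm, mu, sigma) with each column z-scored to mean 0, std 1."""
    m, n = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / m for j in range(n)]
    # Sample standard deviation (n-1 denominator), matching Octave's std()
    sigma = [math.sqrt(sum((row[j] - mu[j]) ** 2 for row in X) / (m - 1))
             for j in range(n)]
    X_norm = [[(row[j] - mu[j]) / sigma[j] for j in range(n)] for row in X]
    return X_norm, mu, sigma

# First three rows of Data2 (size, bedrooms):
X = [[2104.0, 3.0], [1600.0, 3.0], [2400.0, 4.0]]
X_norm, mu, sigma = feature_normalize(X)
```

After normalization each column of `X_norm` sums to (floating-point) zero and has sample standard deviation 1, which is what makes gradient descent converge at one learning rate across features of very different scales.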

2. computeCostMulti.m

    function J = computeCostMulti(X, y, theta)
    %COMPUTECOSTMULTI Compute cost for linear regression with multiple variables
    %   J = COMPUTECOSTMULTI(X, y, theta) computes the cost of using theta as the
    %   parameter for linear regression to fit the data points in X and y

    % Initialize some useful values
    m = length(y); % number of training examples

    % You need to return the following variables correctly
    J = 0;

    % ====================== YOUR CODE HERE ======================
    % Instructions: Compute the cost of a particular choice of theta
    %               You should set J to the cost.
    hypothesis = X * theta;
    J = 1/(2*m) * sum((hypothesis - y).^2);

    % ============================================================

    end

3. gradientDescentMulti.m

    function [theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters)
    %GRADIENTDESCENTMULTI Performs gradient descent to learn theta
    %   theta = GRADIENTDESCENTMULTI(X, y, theta, alpha, num_iters) updates theta by
    %   taking num_iters gradient steps with learning rate alpha

    % Initialize some useful values
    m = length(y); % number of training examples
    J_history = zeros(num_iters, 1);

    for iter = 1:num_iters

        % ====================== YOUR CODE HERE ======================
        % Instructions: Perform a single gradient step on the parameter vector
        %               theta.
        %
        % Hint: While debugging, it can be useful to print out the values
        %       of the cost function (computeCostMulti) and gradient here.
        %
        hypothesis = X * theta;
        delta = X' * (hypothesis - y);
        theta = theta - alpha/m * delta;

        % ============================================================

        % Save the cost J in every iteration
        J_history(iter) = computeCostMulti(X, y, theta);

    end

    end

4. normalEqn.m

    function [theta] = normalEqn(X, y)
    %NORMALEQN Computes the closed-form solution to linear regression
    %   NORMALEQN(X,y) computes the closed-form solution to linear
    %   regression using the normal equations.

    theta = zeros(size(X, 2), 1);

    % ====================== YOUR CODE HERE ======================
    % Instructions: Complete the code to compute the closed form solution
    %               to linear regression and put the result in theta.
    %

    % ---------------------- Sample Solution ----------------------

    theta = pinv(X'*X) * X' * y;

    % -------------------------------------------------------------

    % ============================================================

    end
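The one-liner `pinv(X'*X) * X' * y` hides a linear solve. As a plain-Python sketch (illustrative names, no NumPy), the same closed form can be computed by building the normal-equation system A·theta = b with A = X'X and b = X'y, then solving it with a small Gaussian elimination — this assumes X'X is invertible, whereas `pinv` also handles the degenerate case:

```python
def normal_eqn(X, y):
    """Solve (X'X) theta = X'y by Gaussian elimination with partial pivoting."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]  # X'X
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]            # X'y
    M = [row + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    for c in range(n):
        # pivot: swap in the row with the largest entry in column c
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    theta = [0.0] * n
    for i in reversed(range(n)):  # back-substitution
        theta[i] = (M[i][n] - sum(M[i][j] * theta[j]
                                  for j in range(i + 1, n))) / M[i][i]
    return theta

# Points lying exactly on y = 1 + 2x: expect theta close to [1, 2].
X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
y = [1.0, 3.0, 5.0]
theta = normal_eqn(X, y)
```

Unlike gradient descent, this needs no learning rate, no iterations, and no feature scaling — which is exactly why `ex1_multi.m` reloads the raw, unnormalized data before calling `normalEqn`.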

CheeseZH: Stanford University: Machine Learning Ex1: Linear Regression
