10 Minutes to pandas

This is a short introduction to pandas, geared mainly for new users. You can see more complex recipes in the Cookbook.

Customarily, we import as follows:

    In [1]: import pandas as pd

    In [2]: import numpy as np

    In [3]: import matplotlib.pyplot as plt

Object Creation

See the Data Structure Intro section

Creating a Series by passing a list of values, letting pandas create a default integer index:

    In [4]: s = pd.Series([1, 3, 5, np.nan, 6, 8])

    In [5]: s
    Out[5]:
    0    1.0
    1    3.0
    2    5.0
    3    NaN
    4    6.0
    5    8.0
    dtype: float64

Creating a DataFrame by passing a numpy array, with a datetime index and labeled columns:

    In [6]: dates = pd.date_range('20130101', periods=6)

    In [7]: dates
    Out[7]:
    DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
                   '2013-01-05', '2013-01-06'],
                  dtype='datetime64[ns]', freq='D')

    In [8]: df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))

    In [9]: df
    Out[9]:
                       A         B         C         D
    2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
    2013-01-02  1.212112 -0.173215  0.119209 -1.044236
    2013-01-03 -0.861849 -2.104569 -0.494929  1.071804
    2013-01-04  0.721555 -0.706771 -1.039575  0.271860
    2013-01-05 -0.424972  0.567020  0.276232 -1.087401
    2013-01-06 -0.673690  0.113648 -1.478427  0.524988

Creating a DataFrame by passing a dict of objects that can be converted to series-like.

    In [10]: df2 = pd.DataFrame({'A': 1.,
       ....:                     'B': pd.Timestamp('20130102'),
       ....:                     'C': pd.Series(1, index=list(range(4)), dtype='float32'),
       ....:                     'D': np.array([3] * 4, dtype='int32'),
       ....:                     'E': pd.Categorical(["test", "train", "test", "train"]),
       ....:                     'F': 'foo'})
       ....:

    In [11]: df2
    Out[11]:
         A          B    C  D      E    F
    0  1.0 2013-01-02  1.0  3   test  foo
    1  1.0 2013-01-02  1.0  3  train  foo
    2  1.0 2013-01-02  1.0  3   test  foo
    3  1.0 2013-01-02  1.0  3  train  foo

Having specific dtypes

    In [12]: df2.dtypes
    Out[12]:
    A           float64
    B    datetime64[ns]
    C           float32
    D             int32
    E          category
    F            object
    dtype: object

If you’re using IPython, tab completion for column names (as well as public attributes) is automatically enabled. Here’s a subset of the attributes that will be completed:

    In [13]: df2.<TAB>
    df2.A                  df2.bool
    df2.abs                df2.boxplot
    df2.add                df2.C
    df2.add_prefix         df2.clip
    df2.add_suffix         df2.clip_lower
    df2.align              df2.clip_upper
    df2.all                df2.columns
    df2.any                df2.combine
    df2.append             df2.combine_first
    df2.apply              df2.compound
    df2.applymap           df2.consolidate
    df2.as_blocks          df2.convert_objects
    df2.asfreq             df2.copy
    df2.as_matrix          df2.corr
    df2.astype             df2.corrwith
    df2.at                 df2.count
    df2.at_time            df2.cov
    df2.axes               df2.cummax
    df2.B                  df2.cummin
    df2.between_time       df2.cumprod
    df2.bfill              df2.cumsum
    df2.blocks             df2.D

As you can see, the columns A, B, C, and D are automatically tab completed. E is there as well; the rest of the attributes have been truncated for brevity.

Viewing Data

See the Basics section

See the top & bottom rows of the frame

    In [14]: df.head()
    Out[14]:
                       A         B         C         D
    2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
    2013-01-02  1.212112 -0.173215  0.119209 -1.044236
    2013-01-03 -0.861849 -2.104569 -0.494929  1.071804
    2013-01-04  0.721555 -0.706771 -1.039575  0.271860
    2013-01-05 -0.424972  0.567020  0.276232 -1.087401

    In [15]: df.tail(3)
    Out[15]:
                       A         B         C         D
    2013-01-04  0.721555 -0.706771 -1.039575  0.271860
    2013-01-05 -0.424972  0.567020  0.276232 -1.087401
    2013-01-06 -0.673690  0.113648 -1.478427  0.524988

Display the index, columns, and the underlying numpy data

    In [16]: df.index
    Out[16]:
    DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
                   '2013-01-05', '2013-01-06'],
                  dtype='datetime64[ns]', freq='D')

    In [17]: df.columns
    Out[17]: Index(['A', 'B', 'C', 'D'], dtype='object')

    In [18]: df.values
    Out[18]:
    array([[ 0.4691, -0.2829, -1.5091, -1.1356],
           [ 1.2121, -0.1732,  0.1192, -1.0442],
           [-0.8618, -2.1046, -0.4949,  1.0718],
           [ 0.7216, -0.7068, -1.0396,  0.2719],
           [-0.425 ,  0.567 ,  0.2762, -1.0874],
           [-0.6737,  0.1136, -1.4784,  0.525 ]])

describe() shows a quick statistic summary of your data:

    In [19]: df.describe()
    Out[19]:
                  A         B         C         D
    count  6.000000  6.000000  6.000000  6.000000
    mean   0.073711 -0.431125 -0.687758 -0.233103
    std    0.843157  0.922818  0.779887  0.973118
    min   -0.861849 -2.104569 -1.509059 -1.135632
    25%   -0.611510 -0.600794 -1.368714 -1.076610
    50%    0.022070 -0.228039 -0.767252 -0.386188
    75%    0.658444  0.041933 -0.034326  0.461706
    max    1.212112  0.567020  0.276232  1.071804

Transposing your data

    In [20]: df.T
    Out[20]:
       2013-01-01  2013-01-02  2013-01-03  2013-01-04  2013-01-05  2013-01-06
    A    0.469112    1.212112   -0.861849    0.721555   -0.424972   -0.673690
    B   -0.282863   -0.173215   -2.104569   -0.706771    0.567020    0.113648
    C   -1.509059    0.119209   -0.494929   -1.039575    0.276232   -1.478427
    D   -1.135632   -1.044236    1.071804    0.271860   -1.087401    0.524988

Sorting by an axis

    In [21]: df.sort_index(axis=1, ascending=False)
    Out[21]:
                       D         C         B         A
    2013-01-01 -1.135632 -1.509059 -0.282863  0.469112
    2013-01-02 -1.044236  0.119209 -0.173215  1.212112
    2013-01-03  1.071804 -0.494929 -2.104569 -0.861849
    2013-01-04  0.271860 -1.039575 -0.706771  0.721555
    2013-01-05 -1.087401  0.276232  0.567020 -0.424972
    2013-01-06  0.524988 -1.478427  0.113648 -0.673690

Sorting by values

    In [22]: df.sort_values(by='B')
    Out[22]:
                       A         B         C         D
    2013-01-03 -0.861849 -2.104569 -0.494929  1.071804
    2013-01-04  0.721555 -0.706771 -1.039575  0.271860
    2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
    2013-01-02  1.212112 -0.173215  0.119209 -1.044236
    2013-01-06 -0.673690  0.113648 -1.478427  0.524988
    2013-01-05 -0.424972  0.567020  0.276232 -1.087401

Selection

Note

While standard Python / Numpy expressions for selecting and setting are intuitive and come in handy for interactive work, for production code we recommend the optimized pandas data access methods, .at, .iat, .loc, .iloc and .ix (the last of which is deprecated in more recent pandas versions).
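
As a concrete illustration of the note above (a minimal sketch, using the df and dates defined earlier), chained [] indexing performs two separate lookups and may not be safe for assignment, while .at does the same job in one optimized call:

    # Chained indexing: two lookups; assigning through it may operate on a
    # copy (the familiar SettingWithCopyWarning).
    value = df['A'][dates[0]]

    # Optimized scalar access: one call, and safe for assignment.
    value = df.at[dates[0], 'A']
    df.at[dates[0], 'A'] = value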

See the indexing documentation Indexing and Selecting Data and MultiIndex / Advanced Indexing

Getting

Selecting a single column, which yields a Series, equivalent to df.A

    In [23]: df['A']
    Out[23]:
    2013-01-01    0.469112
    2013-01-02    1.212112
    2013-01-03   -0.861849
    2013-01-04    0.721555
    2013-01-05   -0.424972
    2013-01-06   -0.673690
    Freq: D, Name: A, dtype: float64

Selecting via [], which slices the rows.

    In [24]: df[0:3]
    Out[24]:
                       A         B         C         D
    2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
    2013-01-02  1.212112 -0.173215  0.119209 -1.044236
    2013-01-03 -0.861849 -2.104569 -0.494929  1.071804

    In [25]: df['20130102':'20130104']
    Out[25]:
                       A         B         C         D
    2013-01-02  1.212112 -0.173215  0.119209 -1.044236
    2013-01-03 -0.861849 -2.104569 -0.494929  1.071804
    2013-01-04  0.721555 -0.706771 -1.039575  0.271860

Selection by Label

See more in Selection by Label

For getting a cross section using a label

    In [26]: df.loc[dates[0]]
    Out[26]:
    A    0.469112
    B   -0.282863
    C   -1.509059
    D   -1.135632
    Name: 2013-01-01 00:00:00, dtype: float64

Selecting on a multi-axis by label

    In [27]: df.loc[:, ['A', 'B']]
    Out[27]:
                       A         B
    2013-01-01  0.469112 -0.282863
    2013-01-02  1.212112 -0.173215
    2013-01-03 -0.861849 -2.104569
    2013-01-04  0.721555 -0.706771
    2013-01-05 -0.424972  0.567020
    2013-01-06 -0.673690  0.113648

Showing label slicing, both endpoints are included

    In [28]: df.loc['20130102':'20130104', ['A', 'B']]
    Out[28]:
                       A         B
    2013-01-02  1.212112 -0.173215
    2013-01-03 -0.861849 -2.104569
    2013-01-04  0.721555 -0.706771

Reduction in the dimensions of the returned object

    In [29]: df.loc['20130102', ['A', 'B']]
    Out[29]:
    A    1.212112
    B   -0.173215
    Name: 2013-01-02 00:00:00, dtype: float64

For getting a scalar value

    In [30]: df.loc[dates[0], 'A']
    Out[30]: 0.46911229990718628

For getting fast access to a scalar (equiv to the prior method)

    In [31]: df.at[dates[0], 'A']
    Out[31]: 0.46911229990718628

Selection by Position

See more in Selection by Position

Select via the position of the passed integers

    In [32]: df.iloc[3]
    Out[32]:
    A    0.721555
    B   -0.706771
    C   -1.039575
    D    0.271860
    Name: 2013-01-04 00:00:00, dtype: float64

By integer slices, acting similar to numpy/python

    In [33]: df.iloc[3:5, 0:2]
    Out[33]:
                       A         B
    2013-01-04  0.721555 -0.706771
    2013-01-05 -0.424972  0.567020

By lists of integer position locations, similar to the numpy/python style

    In [34]: df.iloc[[1, 2, 4], [0, 2]]
    Out[34]:
                       A         C
    2013-01-02  1.212112  0.119209
    2013-01-03 -0.861849 -0.494929
    2013-01-05 -0.424972  0.276232

For slicing rows explicitly

    In [35]: df.iloc[1:3, :]
    Out[35]:
                       A         B         C         D
    2013-01-02  1.212112 -0.173215  0.119209 -1.044236
    2013-01-03 -0.861849 -2.104569 -0.494929  1.071804

For slicing columns explicitly

    In [36]: df.iloc[:, 1:3]
    Out[36]:
                       B         C
    2013-01-01 -0.282863 -1.509059
    2013-01-02 -0.173215  0.119209
    2013-01-03 -2.104569 -0.494929
    2013-01-04 -0.706771 -1.039575
    2013-01-05  0.567020  0.276232
    2013-01-06  0.113648 -1.478427

For getting a value explicitly

    In [37]: df.iloc[1, 1]
    Out[37]: -0.17321464905330858

For getting fast access to a scalar (equiv to the prior method)

    In [38]: df.iat[1, 1]
    Out[38]: -0.17321464905330858

Boolean Indexing

Using a single column’s values to select data.

    In [39]: df[df.A > 0]
    Out[39]:
                       A         B         C         D
    2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
    2013-01-02  1.212112 -0.173215  0.119209 -1.044236
    2013-01-04  0.721555 -0.706771 -1.039575  0.271860

Selecting values from a DataFrame where a boolean condition is met.

    In [40]: df[df > 0]
    Out[40]:
                       A         B         C         D
    2013-01-01  0.469112       NaN       NaN       NaN
    2013-01-02  1.212112       NaN  0.119209       NaN
    2013-01-03       NaN       NaN       NaN  1.071804
    2013-01-04  0.721555       NaN       NaN  0.271860
    2013-01-05       NaN  0.567020  0.276232       NaN
    2013-01-06       NaN  0.113648       NaN  0.524988

Using the isin() method for filtering:

    In [41]: df2 = df.copy()

    In [42]: df2['E'] = ['one', 'one', 'two', 'three', 'four', 'three']

    In [43]: df2
    Out[43]:
                       A         B         C         D      E
    2013-01-01  0.469112 -0.282863 -1.509059 -1.135632    one
    2013-01-02  1.212112 -0.173215  0.119209 -1.044236    one
    2013-01-03 -0.861849 -2.104569 -0.494929  1.071804    two
    2013-01-04  0.721555 -0.706771 -1.039575  0.271860  three
    2013-01-05 -0.424972  0.567020  0.276232 -1.087401   four
    2013-01-06 -0.673690  0.113648 -1.478427  0.524988  three

    In [44]: df2[df2['E'].isin(['two', 'four'])]
    Out[44]:
                       A         B         C         D     E
    2013-01-03 -0.861849 -2.104569 -0.494929  1.071804   two
    2013-01-05 -0.424972  0.567020  0.276232 -1.087401  four

Setting

Setting a new column automatically aligns the data by the indexes

    In [45]: s1 = pd.Series([1, 2, 3, 4, 5, 6], index=pd.date_range('20130102', periods=6))

    In [46]: s1
    Out[46]:
    2013-01-02    1
    2013-01-03    2
    2013-01-04    3
    2013-01-05    4
    2013-01-06    5
    2013-01-07    6
    Freq: D, dtype: int64

    In [47]: df['F'] = s1

Setting values by label

    In [48]: df.at[dates[0], 'A'] = 0

Setting values by position

    In [49]: df.iat[0, 1] = 0

Setting by assigning with a numpy array

    In [50]: df.loc[:, 'D'] = np.array([5] * len(df))

The result of the prior setting operations

    In [51]: df
    Out[51]:
                       A         B         C  D    F
    2013-01-01  0.000000  0.000000 -1.509059  5  NaN
    2013-01-02  1.212112 -0.173215  0.119209  5  1.0
    2013-01-03 -0.861849 -2.104569 -0.494929  5  2.0
    2013-01-04  0.721555 -0.706771 -1.039575  5  3.0
    2013-01-05 -0.424972  0.567020  0.276232  5  4.0
    2013-01-06 -0.673690  0.113648 -1.478427  5  5.0

A where operation with setting.

    In [52]: df2 = df.copy()

    In [53]: df2[df2 > 0] = -df2

    In [54]: df2
    Out[54]:
                       A         B         C  D    F
    2013-01-01  0.000000  0.000000 -1.509059 -5  NaN
    2013-01-02 -1.212112 -0.173215 -0.119209 -5 -1.0
    2013-01-03 -0.861849 -2.104569 -0.494929 -5 -2.0
    2013-01-04 -0.721555 -0.706771 -1.039575 -5 -3.0
    2013-01-05 -0.424972 -0.567020 -0.276232 -5 -4.0
    2013-01-06 -0.673690 -0.113648 -1.478427 -5 -5.0

Missing Data

pandas primarily uses the value np.nan to represent missing data. It is by default not included in computations. See the Missing Data section
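
A minimal sketch of that default behavior (assuming the imports above): reductions simply skip NaN values.

    s = pd.Series([1.0, np.nan, 3.0])

    s.sum()    # 4.0 -- the NaN is ignored
    s.mean()   # 2.0 -- the mean of the two non-missing values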

Reindexing allows you to change/add/delete the index on a specified axis. This returns a copy of the data.

    In [55]: df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E'])

    In [56]: df1.loc[dates[0]:dates[1], 'E'] = 1

    In [57]: df1
    Out[57]:
                       A         B         C  D    F    E
    2013-01-01  0.000000  0.000000 -1.509059  5  NaN  1.0
    2013-01-02  1.212112 -0.173215  0.119209  5  1.0  1.0
    2013-01-03 -0.861849 -2.104569 -0.494929  5  2.0  NaN
    2013-01-04  0.721555 -0.706771 -1.039575  5  3.0  NaN

To drop any rows that have missing data.

    In [58]: df1.dropna(how='any')
    Out[58]:
                       A         B         C  D    F    E
    2013-01-02  1.212112 -0.173215  0.119209  5  1.0  1.0

Filling missing data

    In [59]: df1.fillna(value=5)
    Out[59]:
                       A         B         C  D    F    E
    2013-01-01  0.000000  0.000000 -1.509059  5  5.0  1.0
    2013-01-02  1.212112 -0.173215  0.119209  5  1.0  1.0
    2013-01-03 -0.861849 -2.104569 -0.494929  5  2.0  5.0
    2013-01-04  0.721555 -0.706771 -1.039575  5  3.0  5.0

To get the boolean mask where values are nan

    In [60]: pd.isnull(df1)
    Out[60]:
                    A      B      C      D      F      E
    2013-01-01  False  False  False  False   True  False
    2013-01-02  False  False  False  False  False  False
    2013-01-03  False  False  False  False  False   True
    2013-01-04  False  False  False  False  False   True

Operations

See the Basic section on Binary Ops

Stats

Operations in general exclude missing data.
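
For instance (a hedged sketch using the df from the Setting section, whose F column starts with a NaN), most reductions accept a skipna flag if you need the opposite behavior:

    # Missing values are skipped by default, so F still gets a mean.
    df.mean()

    # skipna=False propagates the NaN instead.
    df.mean(skipna=False)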

Performing a descriptive statistic

    In [61]: df.mean()
    Out[61]:
    A   -0.004474
    B   -0.383981
    C   -0.687758
    D    5.000000
    F    3.000000
    dtype: float64

Same operation on the other axis

    In [62]: df.mean(1)
    Out[62]:
    2013-01-01    0.872735
    2013-01-02    1.431621
    2013-01-03    0.707731
    2013-01-04    1.395042
    2013-01-05    1.883656
    2013-01-06    1.592306
    Freq: D, dtype: float64

Operating with objects that have different dimensionality and need alignment. In addition, pandas automatically broadcasts along the specified dimension.

    In [63]: s = pd.Series([1, 3, 5, np.nan, 6, 8], index=dates).shift(2)

    In [64]: s
    Out[64]:
    2013-01-01    NaN
    2013-01-02    NaN
    2013-01-03    1.0
    2013-01-04    3.0
    2013-01-05    5.0
    2013-01-06    NaN
    Freq: D, dtype: float64

    In [65]: df.sub(s, axis='index')
    Out[65]:
                       A         B         C    D    F
    2013-01-01       NaN       NaN       NaN  NaN  NaN
    2013-01-02       NaN       NaN       NaN  NaN  NaN
    2013-01-03 -1.861849 -3.104569 -1.494929  4.0  1.0
    2013-01-04 -2.278445 -3.706771 -4.039575  2.0  0.0
    2013-01-05 -5.424972 -4.432980 -4.723768  0.0 -1.0
    2013-01-06       NaN       NaN       NaN  NaN  NaN

Apply

Applying functions to the data

    In [66]: df.apply(np.cumsum)
    Out[66]:
                       A         B         C   D     F
    2013-01-01  0.000000  0.000000 -1.509059   5   NaN
    2013-01-02  1.212112 -0.173215 -1.389850  10   1.0
    2013-01-03  0.350263 -2.277784 -1.884779  15   3.0
    2013-01-04  1.071818 -2.984555 -2.924354  20   6.0
    2013-01-05  0.646846 -2.417535 -2.648122  25  10.0
    2013-01-06 -0.026844 -2.303886 -4.126549  30  15.0

    In [67]: df.apply(lambda x: x.max() - x.min())
    Out[67]:
    A    2.073961
    B    2.671590
    C    1.785291
    D    0.000000
    F    4.000000
    dtype: float64

Histogramming

See more at Histogramming and Discretization

    In [68]: s = pd.Series(np.random.randint(0, 7, size=10))

    In [69]: s
    Out[69]:
    0    4
    1    2
    2    1
    3    2
    4    6
    5    4
    6    4
    7    6
    8    4
    9    4
    dtype: int64

    In [70]: s.value_counts()
    Out[70]:
    4    5
    6    2
    2    2
    1    1
    dtype: int64

String Methods

Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each element of the array, as in the code snippet below. Note that pattern-matching in str generally uses regular expressions by default (and in some cases always uses them). See more at Vectorized String Methods.

    In [71]: s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])

    In [72]: s.str.lower()
    Out[72]:
    0       a
    1       b
    2       c
    3    aaba
    4    baca
    5     NaN
    6    caba
    7     dog
    8     cat
    dtype: object
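
Because of that regex default, literal matches on punctuation can surprise you. A small hedged sketch with the same s as above:

    # str.contains treats the pattern as a regular expression by default,
    # so 'a.a' matches 'Aaba' and 'Baca' (the '.' is a wildcard)...
    s.str.contains('a.a')

    # ...while regex=False searches for the literal three characters 'a.a'.
    s.str.contains('a.a', regex=False)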

Merge

Concat

pandas provides various facilities for easily combining together Series, DataFrame, and Panel objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.

See the Merging section

Concatenating pandas objects together with concat():

    In [73]: df = pd.DataFrame(np.random.randn(10, 4))

    In [74]: df
    Out[74]:
              0         1         2         3
    0 -0.548702  1.467327 -1.015962 -0.483075
    1  1.637550 -1.217659 -0.291519 -1.745505
    2 -0.263952  0.991460 -0.919069  0.266046
    3 -0.709661  1.669052  1.037882 -1.705775
    4 -0.919854 -0.042379  1.247642 -0.009920
    5  0.290213  0.495767  0.362949  1.548106
    6 -1.131345 -0.089329  0.337863 -0.945867
    7 -0.932132  1.956030  0.017587 -0.016692
    8 -0.575247  0.254161 -1.143704  0.215897
    9  1.193555 -0.077118 -0.408530 -0.862495

    # break it into pieces
    In [75]: pieces = [df[:3], df[3:7], df[7:]]

    In [76]: pd.concat(pieces)
    Out[76]:
              0         1         2         3
    0 -0.548702  1.467327 -1.015962 -0.483075
    1  1.637550 -1.217659 -0.291519 -1.745505
    2 -0.263952  0.991460 -0.919069  0.266046
    3 -0.709661  1.669052  1.037882 -1.705775
    4 -0.919854 -0.042379  1.247642 -0.009920
    5  0.290213  0.495767  0.362949  1.548106
    6 -1.131345 -0.089329  0.337863 -0.945867
    7 -0.932132  1.956030  0.017587 -0.016692
    8 -0.575247  0.254161 -1.143704  0.215897
    9  1.193555 -0.077118 -0.408530 -0.862495

Join

SQL style merges. See the Database style joining section

    In [77]: left = pd.DataFrame({'key': ['foo', 'foo'], 'lval': [1, 2]})

    In [78]: right = pd.DataFrame({'key': ['foo', 'foo'], 'rval': [4, 5]})

    In [79]: left
    Out[79]:
       key  lval
    0  foo     1
    1  foo     2

    In [80]: right
    Out[80]:
       key  rval
    0  foo     4
    1  foo     5

    In [81]: pd.merge(left, right, on='key')
    Out[81]:
       key  lval  rval
    0  foo     1     4
    1  foo     1     5
    2  foo     2     4
    3  foo     2     5

Another example that can be given is:

    In [82]: left = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})

    In [83]: right = pd.DataFrame({'key': ['foo', 'bar'], 'rval': [4, 5]})

    In [84]: left
    Out[84]:
       key  lval
    0  foo     1
    1  bar     2

    In [85]: right
    Out[85]:
       key  rval
    0  foo     4
    1  bar     5

    In [86]: pd.merge(left, right, on='key')
    Out[86]:
       key  lval  rval
    0  foo     1     4
    1  bar     2     5

Append

Append rows to a DataFrame. See the Appending section

    In [87]: df = pd.DataFrame(np.random.randn(8, 4), columns=['A', 'B', 'C', 'D'])

    In [88]: df
    Out[88]:
              A         B         C         D
    0  1.346061  1.511763  1.627081 -0.990582
    1 -0.441652  1.211526  0.268520  0.024580
    2 -1.577585  0.396823 -0.105381 -0.532532
    3  1.453749  1.208843 -0.080952 -0.264610
    4 -0.727965 -0.589346  0.339969 -0.693205
    5 -0.339355  0.593616  0.884345  1.591431
    6  0.141809  0.220390  0.435589  0.192451
    7 -0.096701  0.803351  1.715071 -0.708758

    In [89]: s = df.iloc[3]

    In [90]: df.append(s, ignore_index=True)
    Out[90]:
              A         B         C         D
    0  1.346061  1.511763  1.627081 -0.990582
    1 -0.441652  1.211526  0.268520  0.024580
    2 -1.577585  0.396823 -0.105381 -0.532532
    3  1.453749  1.208843 -0.080952 -0.264610
    4 -0.727965 -0.589346  0.339969 -0.693205
    5 -0.339355  0.593616  0.884345  1.591431
    6  0.141809  0.220390  0.435589  0.192451
    7 -0.096701  0.803351  1.715071 -0.708758
    8  1.453749  1.208843 -0.080952 -0.264610
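
Note that append() copies all of the data on every call, so appending in a loop is quadratic; a common alternative (a sketch, not from the original) is to collect the pieces and concatenate once:

    # One concat over a list of pieces; s.to_frame().T turns the Series
    # back into a one-row DataFrame. Same result as the append above.
    pd.concat([df, s.to_frame().T], ignore_index=True)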

Grouping

By “group by” we are referring to a process involving one or more of the following steps:

  • Splitting the data into groups based on some criteria
  • Applying a function to each group independently
  • Combining the results into a data structure

See the Grouping section

    In [91]: df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
       ....:                          'foo', 'bar', 'foo', 'foo'],
       ....:                    'B': ['one', 'one', 'two', 'three',
       ....:                          'two', 'two', 'one', 'three'],
       ....:                    'C': np.random.randn(8),
       ....:                    'D': np.random.randn(8)})
       ....:

    In [92]: df
    Out[92]:
         A      B         C         D
    0  foo    one -1.202872 -0.055224
    1  bar    one -1.814470  2.395985
    2  foo    two  1.018601  1.552825
    3  bar  three -0.595447  0.166599
    4  foo    two  1.395433  0.047609
    5  bar    two -0.392670 -0.136473
    6  foo    one  0.007207 -0.561757
    7  foo  three  1.928123 -1.623033

Grouping and then applying the sum() function to the resulting groups.

    In [93]: df.groupby('A').sum()
    Out[93]:
                C        D
    A
    bar -2.802588  2.42611
    foo  3.146492 -0.63958
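
sum() is only one choice of reduction; as a hedged sketch, any aggregation works here, and agg() can apply a different function per column:

    # Per-group means of the numeric columns.
    df.groupby('A').mean()

    # Different aggregations for different columns.
    df.groupby('A').agg({'C': 'mean', 'D': 'sum'})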

Grouping by multiple columns forms a hierarchical index, to which we can then apply the function.

    In [94]: df.groupby(['A', 'B']).sum()
    Out[94]:
                      C         D
    A   B
    bar one   -1.814470  2.395985
        three -0.595447  0.166599
        two   -0.392670 -0.136473
    foo one   -1.195665 -0.616981
        three  1.928123 -1.623033
        two    2.414034  1.600434

Reshaping

See the sections on Hierarchical Indexing and Reshaping.

Stack

    In [95]: tuples = list(zip(*[['bar', 'bar', 'baz', 'baz',
       ....:                      'foo', 'foo', 'qux', 'qux'],
       ....:                     ['one', 'two', 'one', 'two',
       ....:                      'one', 'two', 'one', 'two']]))
       ....:

    In [96]: index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])

    In [97]: df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=['A', 'B'])

    In [98]: df2 = df[:4]

    In [99]: df2
    Out[99]:
                         A         B
    first second
    bar   one     0.029399 -0.542108
          two     0.282696 -0.087302
    baz   one    -1.575170  1.771208
          two     0.816482  1.100230

The stack() method “compresses” a level in the DataFrame’s columns.

    In [100]: stacked = df2.stack()

    In [101]: stacked
    Out[101]:
    first  second
    bar    one     A    0.029399
                   B   -0.542108
           two     A    0.282696
                   B   -0.087302
    baz    one     A   -1.575170
                   B    1.771208
           two     A    0.816482
                   B    1.100230
    dtype: float64

With a “stacked” DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack() is unstack(), which by default unstacks the last level:

    In [102]: stacked.unstack()
    Out[102]:
                         A         B
    first second
    bar   one     0.029399 -0.542108
          two     0.282696 -0.087302
    baz   one    -1.575170  1.771208
          two     0.816482  1.100230

    In [103]: stacked.unstack(1)
    Out[103]:
    second        one       two
    first
    bar   A  0.029399  0.282696
          B -0.542108 -0.087302
    baz   A -1.575170  0.816482
          B  1.771208  1.100230

    In [104]: stacked.unstack(0)
    Out[104]:
    first          bar       baz
    second
    one    A  0.029399 -1.575170
           B -0.542108  1.771208
    two    A  0.282696  0.816482
           B -0.087302  1.100230

Pivot Tables

See the section on Pivot Tables.

    In [105]: df = pd.DataFrame({'A': ['one', 'one', 'two', 'three'] * 3,
       .....:                    'B': ['A', 'B', 'C'] * 4,
       .....:                    'C': ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
       .....:                    'D': np.random.randn(12),
       .....:                    'E': np.random.randn(12)})
       .....:

    In [106]: df
    Out[106]:
            A  B    C         D         E
    0     one  A  foo  1.418757 -0.179666
    1     one  B  foo -1.879024  1.291836
    2     two  C  foo  0.536826 -0.009614
    3   three  A  bar  1.006160  0.392149
    4     one  B  bar -0.029716  0.264599
    5     one  C  bar -1.146178 -0.057409
    6     two  A  foo  0.100900 -1.425638
    7   three  B  foo -1.035018  1.024098
    8     one  C  foo  0.314665 -0.106062
    9     one  A  bar -0.773723  1.824375
    10    two  B  bar -1.170653  0.595974
    11  three  C  bar  0.648740  1.167115

We can produce pivot tables from this data very easily:

    In [107]: pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'])
    Out[107]:
    C             bar       foo
    A     B
    one   A -0.773723  1.418757
          B -0.029716 -1.879024
          C -1.146178  0.314665
    three A  1.006160       NaN
          B       NaN -1.035018
          C  0.648740       NaN
    two   A       NaN  0.100900
          B -1.170653       NaN
          C       NaN  0.536826

Time Series

pandas has simple, powerful, and efficient functionality for performing resampling operations during frequency conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial applications. See the Time Series section

    In [108]: rng = pd.date_range('1/1/2012', periods=100, freq='S')

    In [109]: ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)

    In [110]: ts.resample('5Min').sum()
    Out[110]:
    2012-01-01    25083
    Freq: 5T, dtype: int64

Time zone representation

    In [111]: rng = pd.date_range('3/6/2012 00:00', periods=5, freq='D')

    In [112]: ts = pd.Series(np.random.randn(len(rng)), rng)

    In [113]: ts
    Out[113]:
    2012-03-06    0.464000
    2012-03-07    0.227371
    2012-03-08   -0.496922
    2012-03-09    0.306389
    2012-03-10   -2.290613
    Freq: D, dtype: float64

    In [114]: ts_utc = ts.tz_localize('UTC')

    In [115]: ts_utc
    Out[115]:
    2012-03-06 00:00:00+00:00    0.464000
    2012-03-07 00:00:00+00:00    0.227371
    2012-03-08 00:00:00+00:00   -0.496922
    2012-03-09 00:00:00+00:00    0.306389
    2012-03-10 00:00:00+00:00   -2.290613
    Freq: D, dtype: float64

Convert to another time zone

    In [116]: ts_utc.tz_convert('US/Eastern')
    Out[116]:
    2012-03-05 19:00:00-05:00    0.464000
    2012-03-06 19:00:00-05:00    0.227371
    2012-03-07 19:00:00-05:00   -0.496922
    2012-03-08 19:00:00-05:00    0.306389
    2012-03-09 19:00:00-05:00   -2.290613
    Freq: D, dtype: float64

Converting between time span representations

    In [117]: rng = pd.date_range('1/1/2012', periods=5, freq='M')

    In [118]: ts = pd.Series(np.random.randn(len(rng)), index=rng)

    In [119]: ts
    Out[119]:
    2012-01-31   -1.134623
    2012-02-29   -1.561819
    2012-03-31   -0.260838
    2012-04-30    0.281957
    2012-05-31    1.523962
    Freq: M, dtype: float64

    In [120]: ps = ts.to_period()

    In [121]: ps
    Out[121]:
    2012-01   -1.134623
    2012-02   -1.561819
    2012-03   -0.260838
    2012-04    0.281957
    2012-05    1.523962
    Freq: M, dtype: float64

    In [122]: ps.to_timestamp()
    Out[122]:
    2012-01-01   -1.134623
    2012-02-01   -1.561819
    2012-03-01   -0.260838
    2012-04-01    0.281957
    2012-05-01    1.523962
    Freq: MS, dtype: float64

Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following the quarter end:

    In [123]: prng = pd.period_range('1990Q1', '2000Q4', freq='Q-NOV')

    In [124]: ts = pd.Series(np.random.randn(len(prng)), prng)

    In [125]: ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9

    In [126]: ts.head()
    Out[126]:
    1990-03-01 09:00   -0.902937
    1990-06-01 09:00    0.068159
    1990-09-01 09:00   -0.057873
    1990-12-01 09:00   -0.368204
    1991-03-01 09:00   -1.144073
    Freq: H, dtype: float64

Categoricals

Since version 0.15, pandas can include categorical data in a DataFrame. For full docs, see the categorical introduction and the API documentation.

    In [127]: df = pd.DataFrame({"id": [1, 2, 3, 4, 5, 6], "raw_grade": ['a', 'b', 'b', 'a', 'a', 'e']})

Convert the raw grades to a categorical data type.

  1. In [128]: df["grade"] = df["raw_grade"].astype("category")
  2.  
  3. In [129]: df["grade"]
  4. Out[129]:
  5. 0 a
  6. 1 b
  7. 2 b
  8. 3 a
  9. 4 a
  10. 5 e
  11. Name: grade, dtype: category
  12. Categories (3, object): [a, b, e]

Rename the categories to more meaningful names (assigning to Series.cat.categories is inplace!)

  1. In [130]: df["grade"].cat.categories = ["very good", "good", "very bad"]

Reorder the categories and simultaneously add the missing categories (methods under Series.cat return a new Series by default).

  1. In [131]: df["grade"] = df["grade"].cat.set_categories(["very bad", "bad", "medium", "good", "very good"])
  2.  
  3. In [132]: df["grade"]
  4. Out[132]:
  5. 0 very good
  6. 1 good
  7. 2 good
  8. 3 very good
  9. 4 very good
  10. 5 very bad
  11. Name: grade, dtype: category
  12. Categories (5, object): [very bad, bad, medium, good, very good]

Sorting is per order in the categories, not lexical order.

  1. In [133]: df.sort_values(by="grade")
  2. Out[133]:
  3. id raw_grade grade
  4. 5 6 e very bad
  5. 1 2 b good
  6. 2 3 b good
  7. 0 1 a very good
  8. 3 4 a very good
  9. 4 5 a very good

Grouping by a categorical column also shows empty categories.

  1. In [134]: df.groupby("grade").size()
  2. Out[134]:
  3. grade
  4. very bad 1
  5. bad 0
  6. medium 0
  7. good 2
  8. very good 3
  9. dtype: int64

Plotting

See the Plotting docs.

    In [135]: ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))

    In [136]: ts = ts.cumsum()

    In [137]: ts.plot()
    Out[137]: <matplotlib.axes._subplots.AxesSubplot at 0x1187d7278>
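
The Out line above comes from an interactive IPython session; in a plain Python script nothing appears until matplotlib is asked to render (a minimal sketch):

    ts.plot()
    plt.show()   # blocks until the figure window is closed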

On DataFrame, plot() is a convenience to plot all of the columns with labels:

    In [138]: df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index,
       .....:                   columns=['A', 'B', 'C', 'D'])
       .....:

    In [139]: df = df.cumsum()

    In [140]: plt.figure(); df.plot(); plt.legend(loc='best')
    Out[140]: <matplotlib.legend.Legend at 0x11b5dea20>

Getting Data In/Out

CSV

Writing to a csv file

    In [141]: df.to_csv('foo.csv')

Reading from a csv file

    In [142]: pd.read_csv('foo.csv')
    Out[142]:
         Unnamed: 0          A          B         C          D
    0    2000-01-01   0.266457  -0.399641 -0.219582   1.186860
    1    2000-01-02  -1.170732  -0.345873  1.653061  -0.282953
    2    2000-01-03  -1.734933   0.530468  2.060811  -0.515536
    3    2000-01-04  -1.555121   1.452620  0.239859  -1.156896
    4    2000-01-05   0.578117   0.511371  0.103552  -2.428202
    5    2000-01-06   0.478344   0.449933 -0.741620  -1.962409
    6    2000-01-07   1.235339  -0.091757 -1.543861  -1.084753
    ..          ...        ...        ...       ...        ...
    993  2002-09-20 -10.628548  -9.153563 -7.883146  28.313940
    994  2002-09-21 -10.390377  -8.727491 -6.399645  30.914107
    995  2002-09-22  -8.985362  -8.485624 -4.669462  31.367740
    996  2002-09-23  -9.558560  -8.781216 -4.499815  30.518439
    997  2002-09-24  -9.902058  -9.340490 -4.386639  30.105593
    998  2002-09-25 -10.216020  -9.480682 -3.933802  29.758560
    999  2002-09-26 -11.856774 -10.671012 -3.216025  29.369368

    [1000 rows x 5 columns]
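
The Unnamed: 0 column is the old DatetimeIndex written out as an ordinary column; a sketch of one way to restore it when reading back in:

    # index_col=0 uses the first column as the index; parse_dates turns it
    # back into datetimes.
    pd.read_csv('foo.csv', index_col=0, parse_dates=True)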

HDF5

Reading and writing to HDFStores

Writing to an HDF5 Store

    In [143]: df.to_hdf('foo.h5', 'df')

Reading from a HDF5 Store

    In [144]: pd.read_hdf('foo.h5', 'df')
    Out[144]:
                        A          B         C          D
    2000-01-01   0.266457  -0.399641 -0.219582   1.186860
    2000-01-02  -1.170732  -0.345873  1.653061  -0.282953
    2000-01-03  -1.734933   0.530468  2.060811  -0.515536
    2000-01-04  -1.555121   1.452620  0.239859  -1.156896
    2000-01-05   0.578117   0.511371  0.103552  -2.428202
    2000-01-06   0.478344   0.449933 -0.741620  -1.962409
    2000-01-07   1.235339  -0.091757 -1.543861  -1.084753
    ...               ...        ...       ...        ...
    2002-09-20 -10.628548  -9.153563 -7.883146  28.313940
    2002-09-21 -10.390377  -8.727491 -6.399645  30.914107
    2002-09-22  -8.985362  -8.485624 -4.669462  31.367740
    2002-09-23  -9.558560  -8.781216 -4.499815  30.518439
    2002-09-24  -9.902058  -9.340490 -4.386639  30.105593
    2002-09-25 -10.216020  -9.480682 -3.933802  29.758560
    2002-09-26 -11.856774 -10.671012 -3.216025  29.369368

    [1000 rows x 4 columns]

Excel

Reading and writing to MS Excel

Writing to an Excel file

    In [145]: df.to_excel('foo.xlsx', sheet_name='Sheet1')

Reading from an Excel file

    In [146]: pd.read_excel('foo.xlsx', 'Sheet1', index_col=None, na_values=['NA'])
    Out[146]:
                        A          B         C          D
    2000-01-01   0.266457  -0.399641 -0.219582   1.186860
    2000-01-02  -1.170732  -0.345873  1.653061  -0.282953
    2000-01-03  -1.734933   0.530468  2.060811  -0.515536
    2000-01-04  -1.555121   1.452620  0.239859  -1.156896
    2000-01-05   0.578117   0.511371  0.103552  -2.428202
    2000-01-06   0.478344   0.449933 -0.741620  -1.962409
    2000-01-07   1.235339  -0.091757 -1.543861  -1.084753
    ...               ...        ...       ...        ...
    2002-09-20 -10.628548  -9.153563 -7.883146  28.313940
    2002-09-21 -10.390377  -8.727491 -6.399645  30.914107
    2002-09-22  -8.985362  -8.485624 -4.669462  31.367740
    2002-09-23  -9.558560  -8.781216 -4.499815  30.518439
    2002-09-24  -9.902058  -9.340490 -4.386639  30.105593
    2002-09-25 -10.216020  -9.480682 -3.933802  29.758560
    2002-09-26 -11.856774 -10.671012 -3.216025  29.369368

    [1000 rows x 4 columns]

Gotchas

If you are trying an operation and you see an exception like:

    >>> if pd.Series([False, True, False]):
    ...     print("I was true")
    Traceback
        ...
    ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all().
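
A hedged sketch of the usual remedies: reduce the Series to a single boolean before testing it.

    s = pd.Series([False, True, False])

    if s.any():    # True if at least one element is True
        print("at least one was true")

    if s.all():    # True only if every element is True
        print("all were true")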

See Comparisons for an explanation and what to do.

See Gotchas as well.
