--- title: Merge Data keywords: fastai sidebar: home_sidebar summary: "This notebook demonstrates how to merge datasets by matching a single column's values across two datasets. We add columns of data from a foreign dataset into the ACS data we downloaded in the last tutorial." description: "This notebook demonstrates how to merge datasets by matching a single column's values across two datasets. We add columns of data from a foreign dataset into the ACS data we downloaded in the last tutorial." ---

This Coding Notebook is the second in a series.

An interactive version of this notebook can be opened in Google Colab.

This Colab and more can be found on our webpage.

  • Content covered in previous tutorials will be used in later tutorials.

  • New code and/or information will have explanations and/or descriptions attached.

  • Concepts or code covered in previous tutorials will be used without being explained in their entirety.

  • The Dataplay Handbook uses development techniques covered in the Datalabs Guidebook.

  • If content cannot be found in the current tutorial and is not covered in a previous tutorial, please let me know.

  • This notebook has been optimized for Google Colab run in a Chrome browser.

  • Statements found on the index page regarding views expressed, responsibility, errors and omissions, use at your own risk, and licensing extend throughout this tutorial.

About this Tutorial:

What's Inside?

The Tutorial

In this notebook, the basics of how to perform a merge are introduced.

  • We will merge two datasets
  • We will merge two datasets using a crosswalk

Objectives

By the end of this tutorial users should have an understanding of:

  • How dataset merges are performed
  • The different types of joins a merge can use
  • The 'mergeDatasets' function, and how to use it in the future

Guided Walkthrough

SETUP

Install these libraries into the virtual environment.

{% raw %}
!pip install geopandas
!pip install dataplay
Collecting geopandas
  Downloading https://files.pythonhosted.org/packages/f8/dd/c0a6429cc7692efd5c99420c9df525c40f472b50705871a770449027e244/geopandas-0.8.0-py2.py3-none-any.whl (962kB)
     |████████████████████████████████| 962kB 2.8MB/s 
Collecting pyproj>=2.2.0
  Downloading https://files.pythonhosted.org/packages/e5/c3/071e080230ac4b6c64f1a2e2f9161c9737a2bc7b683d2c90b024825000c0/pyproj-2.6.1.post1-cp36-cp36m-manylinux2010_x86_64.whl (10.9MB)
     |████████████████████████████████| 10.9MB 18.4MB/s 
Requirement already satisfied: pandas>=0.23.0 in /usr/local/lib/python3.6/dist-packages (from geopandas) (1.0.5)
Requirement already satisfied: shapely in /usr/local/lib/python3.6/dist-packages (from geopandas) (1.7.0)
Collecting fiona
  Downloading https://files.pythonhosted.org/packages/ec/20/4e63bc5c6e62df889297b382c3ccd4a7a488b00946aaaf81a118158c6f09/Fiona-1.8.13.post1-cp36-cp36m-manylinux1_x86_64.whl (14.7MB)
     |████████████████████████████████| 14.7MB 251kB/s 
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.23.0->geopandas) (2018.9)
Requirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.23.0->geopandas) (2.8.1)
Requirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.23.0->geopandas) (1.18.5)
Requirement already satisfied: attrs>=17 in /usr/local/lib/python3.6/dist-packages (from fiona->geopandas) (19.3.0)
Requirement already satisfied: six>=1.7 in /usr/local/lib/python3.6/dist-packages (from fiona->geopandas) (1.12.0)
Collecting munch
  Downloading https://files.pythonhosted.org/packages/cc/ab/85d8da5c9a45e072301beb37ad7f833cd344e04c817d97e0cc75681d248f/munch-2.5.0-py2.py3-none-any.whl
Requirement already satisfied: click<8,>=4.0 in /usr/local/lib/python3.6/dist-packages (from fiona->geopandas) (7.1.2)
Collecting click-plugins>=1.0
  Downloading https://files.pythonhosted.org/packages/e9/da/824b92d9942f4e472702488857914bdd50f73021efea15b4cad9aca8ecef/click_plugins-1.1.1-py2.py3-none-any.whl
Collecting cligj>=0.5
  Downloading https://files.pythonhosted.org/packages/e4/be/30a58b4b0733850280d01f8bd132591b4668ed5c7046761098d665ac2174/cligj-0.5.0-py3-none-any.whl
Installing collected packages: pyproj, munch, click-plugins, cligj, fiona, geopandas
Successfully installed click-plugins-1.1.1 cligj-0.5.0 fiona-1.8.13.post1 geopandas-0.8.0 munch-2.5.0 pyproj-2.6.1.post1
Collecting dataplay
  Downloading https://files.pythonhosted.org/packages/36/a1/ca1b0db9be0194aa3cef96087e73e0bbc5348edb7c31f1aaccc42f3e66e2/dataplay-0.0.5-py3-none-any.whl
Installing collected packages: dataplay
Successfully installed dataplay-0.0.5
{% endraw %} {% raw %}
# @title Run: Install Modules
{% endraw %} {% raw %}
{% endraw %}
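The hidden cell above loads the modules used throughout this walkthrough. A minimal sketch of those imports is below; the dataplay module paths are assumptions and may need adjusting to match the installed version of the package.

{% raw %}
# Minimal import sketch (the dataplay module paths are assumptions; adjust as needed).
import csv                 # used when saving merged results with csv.QUOTE_ALL
import pandas as pd        # dataframe handling and pd.merge

# retrieve_acs_data was built in the previous tutorial; mergeDatasets is covered in the Advanced section.
from dataplay.acsDownload import retrieve_acs_data
from dataplay.merge import mergeDatasets
{% endraw %}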

(Optional) Local File Access

Nothing we haven't already seen.

Retrieve Datasets

Our example will merge two simple datasets: pulling CSA names using tract IDs.

The first dataset will be obtained from the Census Bureau's ACS 5-year surveys.

Functions used to obtain this data come from Tutorial 0) ACS: Explore and Download.

The second dataset will be obtained using a CSV from a publicly accessible link.

Get the Principal dataset.

We will use the function we created in our last tutorial to download the data!

{% raw %}
# Our download function will use Baltimore City's tract, county and state as internal parameters.
# Changing the values below to different geographic reference codes will change those parameters.
tract = '*'
county = '510'
state = '24'

# Specify the download parameters the function will receive here
tableId = 'B19001'
year = '17'
saveAcs = False
{% endraw %} {% raw %}
df = retrieve_acs_data(state, county, tract, tableId, year, saveAcs)
df.head()
Number of Columns 17
B19001_001E_Total B19001_002E_Total_Less_than_$10_000 B19001_003E_Total_$10_000_to_$14_999 B19001_004E_Total_$15_000_to_$19_999 B19001_005E_Total_$20_000_to_$24_999 B19001_006E_Total_$25_000_to_$29_999 B19001_007E_Total_$30_000_to_$34_999 B19001_008E_Total_$35_000_to_$39_999 B19001_009E_Total_$40_000_to_$44_999 B19001_010E_Total_$45_000_to_$49_999 B19001_011E_Total_$50_000_to_$59_999 B19001_012E_Total_$60_000_to_$74_999 B19001_013E_Total_$75_000_to_$99_999 B19001_014E_Total_$100_000_to_$124_999 B19001_015E_Total_$125_000_to_$149_999 B19001_016E_Total_$150_000_to_$199_999 B19001_017E_Total_$200_000_or_more state county tract
NAME
Census Tract 1901 796 237 76 85 38 79 43 36 35 15 43 45 39 5 0 6 14 24 510 190100
Census Tract 1902 695 63 87 93 6 58 30 14 29 23 38 113 70 6 32 11 22 24 510 190200
Census Tract 2201 2208 137 229 124 52 78 87 50 80 13 217 66 159 205 167 146 398 24 510 220100
Census Tract 2303 632 3 20 0 39 7 0 29 8 9 44 29 98 111 63 94 78 24 510 230300
Census Tract 2502.07 836 102 28 101 64 104 76 41 40 47 72 28 60 19 27 15 12 24 510 250207
{% endraw %}

Get the Secondary Dataset

Spatial data can be obtained using the 2010 Census Tract Shapefile Picking Tool, or by searching the Census Bureau's website for TIGER/Line Shapefiles.

"The core TIGER/Line Files and Shapefiles do not include demographic data, but they do contain geographic entity codes (GEOIDs) that can be linked to the Census Bureau's demographic data, available on data.census.gov." - census.gov
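For reference, an 11-digit tract GEOID is simply the 2-digit state FIPS, 3-digit county FIPS, and 6-digit tract codes concatenated together, which is what makes this linking possible. A small sketch (the helper name is illustrative, not part of any library):

{% raw %}
# Build an 11-digit tract GEOID from its parts: 2-digit state + 3-digit county + 6-digit tract.
# Using the Baltimore City values seen above: '24' + '510' + '190100' -> '24510190100'.
def make_tract_geoid(state: str, county: str, tract: str) -> str:
    return state.zfill(2) + county.zfill(3) + tract.zfill(6)

print(make_tract_geoid('24', '510', '190100'))  # 24510190100
{% endraw %}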

{% raw %}
print('Boundaries Example: https://docs.google.com/spreadsheets/d/e/2PACX-1vQ8xXdUaT17jkdK0MWTJpg3GOy6jMWeaXTlguXNjCSb8Vr_FanSZQRaTU-m811fQz4kyMFK5wcahMNY/pub?gid=886223646&single=true&output=csv')
Boundaries Example: https://docs.google.com/spreadsheets/d/e/2PACX-1vQ8xXdUaT17jkdK0MWTJpg3GOy6jMWeaXTlguXNjCSb8Vr_FanSZQRaTU-m811fQz4kyMFK5wcahMNY/pub?gid=886223646&single=true&output=csv
{% endraw %} {% raw %}
# Get the second dataset.
# The boundaries example above contains polygon geometry information.
# Here, however, we load a tract-to-CSA crosswalk as our second dataset.
# We will merge it onto our principal dataset by matching on the tract column.

# The url listed below is public.

print('Tract 2 CSA Crosswalk : https://docs.google.com/spreadsheets/d/e/2PACX-1vREwwa_s8Ix39OYGnnS_wA8flOoEkU7reIV4o3ZhlwYhLXhpNEvnOia_uHUDBvnFptkLLHHlaQNvsQE/pub?output=csv')

inFile = input("\n Please enter the location of your file : \n" )

crosswalk = pd.read_csv( inFile )
crosswalk.head()
Tract 2 CSA Crosswalk : https://docs.google.com/spreadsheets/d/e/2PACX-1vREwwa_s8Ix39OYGnnS_wA8flOoEkU7reIV4o3ZhlwYhLXhpNEvnOia_uHUDBvnFptkLLHHlaQNvsQE/pub?output=csv

 Please enter the location of your file : 
https://docs.google.com/spreadsheets/d/e/2PACX-1vREwwa_s8Ix39OYGnnS_wA8flOoEkU7reIV4o3ZhlwYhLXhpNEvnOia_uHUDBvnFptkLLHHlaQNvsQE/pub?output=csv
TRACT2010 GEOID2010 CSA2010
0 10100 24510010100 Canton
1 10200 24510010200 Patterson Park N...
2 10300 24510010300 Canton
3 10400 24510010400 Canton
4 10500 24510010500 Fells Point
{% endraw %}

Perform Merge & Save

As a friendly reminder, here are the four basic join types (a small sketch follows this list):

  • Left - returns all left records; right columns are included only where a match is found
  • Right - returns all right records; left columns are included only where a match is found
  • Outer (full) - returns all records from both datasets, regardless of whether the keys match
  • Inner - returns only records where the keys match
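To make these concrete, here is a small self-contained sketch using two toy DataFrames (the data is made up for illustration; it is not the tutorial's datasets):

{% raw %}
# Toy illustration of the four join types.
import pandas as pd

left = pd.DataFrame({'tract': [101, 102, 103], 'households': [796, 695, 2208]})
right = pd.DataFrame({'tract': [102, 103, 104], 'csa': ['Canton', 'Fells Point', 'Midtown']})

for how in ['left', 'right', 'outer', 'inner']:
    print('--- ' + how + ' join ---')
    print(pd.merge(left, right, on='tract', how=how))
{% endraw %}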

Get Columns from both datasets to match on

You can find these values in the column listings shown above.

Our examples will work with the prompted values.

{% raw %}
print( 'Principal Columns ' + str(df.columns) + '')
left_on = input("Left on principal column: ('tract') \n" )
print(' \n ');
print( 'Crosswalk Columns ' + str(crosswalk.columns) + '')
right_on = input("Right on crosswalk column: ('TRACT2010') \n" )
Principal Columns Index(['B19001_001E_Total', 'B19001_002E_Total_Less_than_$10_000',
       'B19001_003E_Total_$10_000_to_$14_999',
       'B19001_004E_Total_$15_000_to_$19_999',
       'B19001_005E_Total_$20_000_to_$24_999',
       'B19001_006E_Total_$25_000_to_$29_999',
       'B19001_007E_Total_$30_000_to_$34_999',
       'B19001_008E_Total_$35_000_to_$39_999',
       'B19001_009E_Total_$40_000_to_$44_999',
       'B19001_010E_Total_$45_000_to_$49_999',
       'B19001_011E_Total_$50_000_to_$59_999',
       'B19001_012E_Total_$60_000_to_$74_999',
       'B19001_013E_Total_$75_000_to_$99_999',
       'B19001_014E_Total_$100_000_to_$124_999',
       'B19001_015E_Total_$125_000_to_$149_999',
       'B19001_016E_Total_$150_000_to_$199_999',
       'B19001_017E_Total_$200_000_or_more', 'state', 'county', 'tract'],
      dtype='object')
Left on principal column: ('tract') 
tract
 
 
Crosswalk Columns Index(['TRACT2010', 'GEOID2010', 'CSA2010'], dtype='object')
Right on crosswalk column: ('TRACT2010') 
TRACT2010
{% endraw %}

Specify how the merge will be performed

We will perform a left merge in this example.

It will return our Principal dataset with columns from the second dataset appended to records where their specified columns match.

{% raw %}
how = input("How: (‘left’, ‘right’, ‘outer’, ‘inner’) " )
How: (‘left’, ‘right’, ‘outer’, ‘inner’) left
{% endraw %}

Actually perform the merge

{% raw %}
merged_df = pd.merge(df, crosswalk, left_on=left_on, right_on=right_on, how=how)
merged_df = merged_df.drop(left_on, axis=1)
merged_df.head()
B19001_001E_Total B19001_002E_Total_Less_than_$10_000 B19001_003E_Total_$10_000_to_$14_999 B19001_004E_Total_$15_000_to_$19_999 B19001_005E_Total_$20_000_to_$24_999 B19001_006E_Total_$25_000_to_$29_999 B19001_007E_Total_$30_000_to_$34_999 B19001_008E_Total_$35_000_to_$39_999 B19001_009E_Total_$40_000_to_$44_999 B19001_010E_Total_$45_000_to_$49_999 B19001_011E_Total_$50_000_to_$59_999 B19001_012E_Total_$60_000_to_$74_999 B19001_013E_Total_$75_000_to_$99_999 B19001_014E_Total_$100_000_to_$124_999 B19001_015E_Total_$125_000_to_$149_999 B19001_016E_Total_$150_000_to_$199_999 B19001_017E_Total_$200_000_or_more state county TRACT2010 GEOID2010 CSA2010
0 796 237 76 85 38 79 43 36 35 15 43 45 39 5 0 6 14 24 510 190100 24510190100 Southwest Baltimore
1 695 63 87 93 6 58 30 14 29 23 38 113 70 6 32 11 22 24 510 190200 24510190200 Southwest Baltimore
2 2208 137 229 124 52 78 87 50 80 13 217 66 159 205 167 146 398 24 510 220100 24510220100 Inner Harbor/Fed...
3 632 3 20 0 39 7 0 29 8 9 44 29 98 111 63 94 78 24 510 230300 24510230300 South Baltimore
4 836 102 28 101 64 104 76 41 40 47 72 28 60 19 27 15 12 24 510 250207 24510250207 Cherry Hill
{% endraw %}

As you can see, our Census data now has a CSA appended to it.
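Before saving, it can be worth checking that every record actually found a match; with a left merge, any tract missing from the crosswalk shows up as NaN in the appended columns. A quick check, assuming the column names used above:

{% raw %}
# Count records that did not find a crosswalk match (CSA2010 is NaN after a left merge).
unmatched = merged_df[ merged_df['CSA2010'].isna() ]
print(str(len(unmatched)) + ' of ' + str(len(merged_df)) + ' records have no CSA2010 match')
{% endraw %}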

{% raw %}
# Save Data to User Specified File
outFile = input("Please enter the new filename to save the data to ('acs_csa_merge_test'): " )
merged_df.to_csv(outFile+'.csv', quoting=csv.QUOTE_ALL) 
Please enter the new filename to save the data to ('acs_csa_merge_test'): test123
{% endraw %}

Final Result

{% raw %}
flag = input("Enter a URL? If not ACS data will be used. (Y/N):  " )
if (flag == 'y' or flag == 'Y'):
  df = pd.read_csv( input("Please enter the location of your Principal file: " ) )
else:
  tract = input("Please enter tract id (*): " )
  county = input("Please enter county id (510): " )
  state = input("Please enter state id (24): " )
  tableId = input("Please enter acs table id (B19001): " ) 
  year = input("Please enter acs year (18): " )
  saveAcs = input("Save ACS? (Y/N): " )
  df = retrieve_acs_data(state, county, tract, tableId, year, saveAcs)

print( 'Principal Columns ' + str(df.columns))

print('Crosswalk Example: https://docs.google.com/spreadsheets/d/e/2PACX-1vREwwa_s8Ix39OYGnnS_wA8flOoEkU7reIV4o3ZhlwYhLXhpNEvnOia_uHUDBvnFptkLLHHlaQNvsQE/pub?output=csv')

crosswalk = pd.read_csv( input("Please enter the location of your crosswalk file: " ) )
print( 'Crosswalk Columns ' + str(crosswalk.columns) + '\n')

left_on = input("Left on: " )
right_on = input("Right on: " )
how = input("How: (‘left’, ‘right’, ‘outer’, ‘inner’) " )

merged_df = pd.merge(df, crosswalk, left_on=left_on, right_on=right_on, how=how)
merged_df = merged_df.drop(left_on, axis=1)

# Save the data
saveFile = input("Save File ('Y' or 'N'): ")
if saveFile == 'Y' or saveFile == 'y':
  outFile = input("Saved Filename (Do not include the file extension ): ")
  merged_df.to_csv(outFile+'.csv', quoting=csv.QUOTE_ALL);
{% endraw %} {% raw %}
merged_df
(               Bank Name          Address(es)  Census Tract  GEOID2010  TRACTCE10      GEOID10   NAME10  CSA  Tract             geometry
 0    Arundel Federal ...  333 E. Patapsco ...      250401.0   2.45e+10     250401  24510250401  2504.01  NaN   2504  POLYGON ((-76.59...
 1                    NaN    3601 S Hanover St      250401.0   2.45e+10     250401  24510250401  2504.01  NaN   2504  POLYGON ((-76.59...
 2    Bank of America,...       20 N Howard St       40100.0   2.45e+10      40100  24510040100   401.00  NaN    401  POLYGON ((-76.61...
 3                    NaN  100 S Charles St...       40100.0   2.45e+10      40100  24510040100   401.00  NaN    401  POLYGON ((-76.61...
 4    Branch Banking a...       2 N CHARLES ST       40100.0   2.45e+10      40100  24510040100   401.00  NaN    401  POLYGON ((-76.61...
 ..                   ...                  ...           ...        ...        ...          ...      ...  ...    ...                  ...
 233                  NaN                  NaN           NaN        NaN     260403  24510260403  2604.03  NaN   2604  POLYGON ((-76.52...
 234                  NaN                  NaN           NaN        NaN      80800  24510080800   808.00  NaN    808  POLYGON ((-76.58...
 235                  NaN                  NaN           NaN        NaN     160500  24510160500  1605.00  NaN   1605  POLYGON ((-76.65...
 236                  NaN                  NaN           NaN        NaN      90100  24510090100   901.00  NaN    901  POLYGON ((-76.60...
 237                  NaN                  NaN           NaN        NaN     260402  24510260402  2604.02  NaN   2604  POLYGON ((-76.54...
 
 [238 rows x 10 columns],
      TRACTCE10      GEOID10   NAME10                  CSA  Tract             geometry
 0       151000  24510151000  1510.00  Dorchester/Ashbu...   1510  POLYGON ((-76.67...
 1        80700  24510080700   807.00      Greenmount East    807  POLYGON ((-76.58...
 2        80500  24510080500   805.00        Clifton-Berea    805  POLYGON ((-76.58...
 3       150500  24510150500  1505.00    Greater Mondawmin   1505  POLYGON ((-76.65...
 4       120100  24510120100  1201.00  North Baltimore/...   1201  POLYGON ((-76.60...
 ..         ...          ...      ...                  ...    ...                  ...
 195      80800  24510080800   808.00  Oldtown/Middle East    808  POLYGON ((-76.58...
 196     160500  24510160500  1605.00     Greater Rosemont   1605  POLYGON ((-76.65...
 197      90100  24510090100   901.00       Greater Govans    901  POLYGON ((-76.60...
 198     260402  24510260402  2604.02  Claremont/Armistead   2604  POLYGON ((-76.54...
 199     260700  24510260700  2607.00  Orangeville/East...   2607  POLYGON ((-76.55...
 
 [200 rows x 6 columns])
{% endraw %}

Advanced

Intro

The following Python function is a bulked-out version of the notes above.

  • It contains everything from the tutorial plus more.
  • It can be imported and used in future projects or stand alone.

Description: Adds columns of data from a foreign dataset into a primary dataset along set parameters.

Purpose: Makes merging datasets simple.

Services

  • Merge two datasets without a crosswalk
  • Merge two datasets with a crosswalk
{% raw %}

mergeDatasets[source]

mergeDatasets(left_ds=False, right_ds=False, crosswalk_ds=False, use_crosswalk=True, left_col=False, right_col=False, crosswalk_left_col=False, crosswalk_right_col=False, merge_how=False, interactive=True)

{% endraw %}

Function Explanation

Input(s):

  • Dataset url
  • Crosswalk Url
  • Right On
  • Left On
  • How
  • New Filename

Output: File

How it works:

  • Read in the datasets
  • Perform the merge (a sketch of this branching follows the list)

    • If the 'how' parameter is one of ['left', 'right', 'outer', 'inner'], a standard merge of that type is performed.
    • If a column name is provided in the 'how' parameter, that single column is pulled from the right dataset and appended to the left dataset (left_ds) as a new column.
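The following is a minimal sketch of that branching, written for illustration only (it is not the actual mergeDatasets source):

{% raw %}
# Sketch only: how the 'how' parameter branches (not the library's implementation).
import pandas as pd

def merge_or_pull(left_df, right_df, left_col, right_col, how):
    if how in ['left', 'right', 'outer', 'inner']:
        # Standard merge using the chosen join type.
        return pd.merge(left_df, right_df, left_on=left_col, right_on=right_col, how=how)
    # Otherwise 'how' names a column: pull just that column from the right dataset
    # by matching keys and append it to the left dataset.
    lookup = right_df.set_index(right_col)[how]
    out = left_df.copy()
    out[how] = out[left_col].map(lookup)
    return out
{% endraw %}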

Function Diagrams

Class Diagram of mergeDatasets()

{% raw %}
%%html
<img src="https://charleskarpati.com/images/class_diagram_merge_datasets.png">
{% endraw %}

mergeDatasets Flow Chart

{% raw %}
%%html
<img src="https://charleskarpati.com/images/flow_chart_merge_datasets.png">
{% endraw %}

Gantt Chart of mergeDatasets()

{% raw %}
%%html
<img src="https://charleskarpati.com/images/gannt_chart_merge_datasets.png">
{% endraw %}

Sequence Diagram mergeDatasets()

{% raw %}
%%html
<img src="https://charleskarpati.com/images/sequence_diagram_merge_datasets.png">
{% endraw %}

Function Examples

Interactive Example 1

{% raw %}
# Table: FDIC Baltimore Banks
# Columns: Bank Name, Address(es), Census Tract
left_ds = 'https://docs.google.com/spreadsheets/d/e/2PACX-1vTViIZu-hbvhM3L7dIRAG95ISa7TNhUwdzlYxYzc1ygJoaYc3_scaXHe8Rtj5iwNA/pub?gid=1078028768&single=true&output=csv'
left_col = 'Census Tract'

# Table: Crosswalk Census Communities
# 'TRACT2010', 'GEOID2010', 'CSA2010'
right_ds = 'https://docs.google.com/spreadsheets/d/e/2PACX-1vREwwa_s8Ix39OYGnnS_wA8flOoEkU7reIV4o3ZhlwYhLXhpNEvnOia_uHUDBvnFptkLLHHlaQNvsQE/pub?output=csv'
right_col='TRACT2010'

merge_how = 'outer'
interactive = True
use_crosswalk = False

merged_df = mergeDatasets( left_ds=left_ds, left_col=left_col, 
              right_ds=right_ds, right_col=right_col, 
              merge_how='left', interactive =True, use_crosswalk=use_crosswalk )
 Handling Left Dataset
retrieveDatasetFromUrl https://docs.google.com/spreadsheets/d/e/2PACX-1vTViIZu-hbvhM3L7dIRAG95ISa7TNhUwdzlYxYzc1ygJoaYc3_scaXHe8Rtj5iwNA/pub?gid=1078028768&single=true&output=csv
checkDataSetExists False
retrieveDatasetFromUrl https://docs.google.com/spreadsheets/d/e/2PACX-1vTViIZu-hbvhM3L7dIRAG95ISa7TNhUwdzlYxYzc1ygJoaYc3_scaXHe8Rtj5iwNA/pub?gid=1078028768&single=true&output=csv
checkDataSetExists True
checkDataSetExists True
checkDataSetExists True
Left Dataset and Columns are Valid

 Handling Right Dataset
retrieveDatasetFromUrl https://docs.google.com/spreadsheets/d/e/2PACX-1vREwwa_s8Ix39OYGnnS_wA8flOoEkU7reIV4o3ZhlwYhLXhpNEvnOia_uHUDBvnFptkLLHHlaQNvsQE/pub?output=csv
checkDataSetExists False
retrieveDatasetFromUrl https://docs.google.com/spreadsheets/d/e/2PACX-1vREwwa_s8Ix39OYGnnS_wA8flOoEkU7reIV4o3ZhlwYhLXhpNEvnOia_uHUDBvnFptkLLHHlaQNvsQE/pub?output=csv
checkDataSetExists True
checkDataSetExists True
checkDataSetExists True
Right Dataset and Columns are Valid

 Checking the merge_how Parameter
merge_how operator is Valid left
checkDataSetExists False

 Ensuring Left->Right compatability
Converting Local Key from float64 to Int
PERFORMING MERGE LEFT->RIGHT
left_col Census Tract right_col TRACT2010 how left

 Local Column Values Not Matched 
[-1321321321321325            400100            401101            401102
            401507            403401            403500            403803
            411306            411406            411408            420100
            420301            420701            420800            430800
            430900            440100            440200            440702
            441101            450300            452000            490601
            490602            491100            491201            491600
            492300            750101]
43

 Crosswalk Unique Column Values
[ 10100  10200  10300  10400  10500  20100  20200  20300  30100  30200
  40100  40200  60100  60200  60300  60400  70100  70200  70300  70400
  80101  80102  80200  80301  80302  80400  80500  80600  80700  80800
  90100  90200  90300  90400  90500  90600  90700  90800  90900 100100
 100200 100300 110100 110200 120100 120201 120202 120300 120400 120500
 120600 120700 130100 130200 130300 130400 130600 130700 130803 130804
 130805 130806 140100 140200 140300 150100 150200 150300 150400 150500
 150600 150701 150702 150800 150900 151000 151100 151200 151300 160100
 160200 160300 160400 160500 160600 160700 160801 160802 170100 170200
 170300 180100 180200 180300 190100 190200 190300 200100 200200 200300
 200400 200500 200600 200701 200702 200800 210100 210200 220100 230100
 230200 230300 240100 240200 240300 240400 250101 250102 250103 250203
 250204 250205 250206 250207 250301 250303 250401 250402 250500 250600
 260101 260102 260201 260202 260203 260301 260302 260303 260401 260402
 260403 260404 260501 260604 260605 260700 260800 260900 261000 261100
 270101 270102 270200 270301 270302 270401 270402 270501 270502 270600
 270701 270702 270703 270801 270802 270803 270804 270805 270901 270902
 270903 271001 271002 271101 271102 271200 271300 271400 271501 271503
 271600 271700 271801 271802 271900 272003 272004 272005 272006 272007
 280101 280102 280200 280301 280302 280401 280402 280403 280404 280500
  10000]
{% endraw %} {% raw %}
merged_df.head()
Bank Name Address(es) Census Tract TRACT2010 GEOID2010 CSA2010
0 Arundel Federal ... 333 E. Patapsco ... 250401 250401.0 2.45e+10 Brooklyn/Curtis ...
1 Bank of America,... 20 N Howard St 40100 40100.0 2.45e+10 Downtown/Seton Hill
3 NaN 100 S Charles St... 40100 40100.0 2.45e+10 Downtown/Seton Hill
4 NaN 1046 Light St 230200 230200.0 2.45e+10 Inner Harbor/Fed...
5 NaN 1800 E Monument St 70400 70400.0 2.45e+10 Oldtown/Middle East
{% endraw %}

Example 1.25) Do it again, but merge a boundary dataset onto the previous dataframe

{% raw %}
left_col = 'GEOID2010'

right_ds = 'https://docs.google.com/spreadsheets/d/e/2PACX-1vQ8xXdUaT17jkdK0MWTJpg3GOy6jMWeaXTlguXNjCSb8Vr_FanSZQRaTU-m811fQz4kyMFK5wcahMNY/pub?gid=886223646&single=true&output=csv'
right_col ='GEOID10'


merged_df_geom = mergeDatasets( left_ds=merged_df, left_col=left_col, 
              use_crosswalk=False, crosswalk_ds=False,
              crosswalk_left_col=False, crosswalk_right_col=False,  # not used since use_crosswalk=False
              right_ds=right_ds, right_col=right_col, 
              merge_how='outer', interactive = True )
 Handling Left Dataset
retrieveDatasetFromUrl                Bank Name          Address(es)  Census Tract  TRACT2010  GEOID2010              CSA2010
0    Arundel Federal ...  333 E. Patapsco ...        250401   250401.0   2.45e+10  Brooklyn/Curtis ...
1    Bank of America,...       20 N Howard St         40100    40100.0   2.45e+10  Downtown/Seton Hill
3                    NaN  100 S Charles St...         40100    40100.0   2.45e+10  Downtown/Seton Hill
4                    NaN        1046 Light St        230200   230200.0   2.45e+10  Inner Harbor/Fed...
5                    NaN   1800 E Monument St         70400    70400.0   2.45e+10  Oldtown/Middle East
..                   ...                  ...           ...        ...        ...                  ...
126                  NaN      5121 ROLAND AVE        271300   271300.0   2.45e+10  Greater Roland P...
128                  NaN  1726 E NORTHERN ...        270803   270803.0   2.45e+10           Loch Raven
129                  NaN  4735 LIBERTY HEI...        280200   280200.0   2.45e+10  Howard Park/West...
132                  NaN  5701 REISTERSTOW...        271900   271900.0   2.45e+10       Glen-Fallstaff
133  Wilmington Trust...       1 Light Street         40100    40100.0   2.45e+10  Downtown/Seton Hill

[91 rows x 6 columns]
checkDataSetExists True
checkDataSetExists True
checkDataSetExists True
Left Dataset and Columns are Valid

 Handling Right Dataset
retrieveDatasetFromUrl https://docs.google.com/spreadsheets/d/e/2PACX-1vQ8xXdUaT17jkdK0MWTJpg3GOy6jMWeaXTlguXNjCSb8Vr_FanSZQRaTU-m811fQz4kyMFK5wcahMNY/pub?gid=886223646&single=true&output=csv
checkDataSetExists False
retrieveDatasetFromUrl https://docs.google.com/spreadsheets/d/e/2PACX-1vQ8xXdUaT17jkdK0MWTJpg3GOy6jMWeaXTlguXNjCSb8Vr_FanSZQRaTU-m811fQz4kyMFK5wcahMNY/pub?gid=886223646&single=true&output=csv
checkDataSetExists True
checkDataSetExists True
checkDataSetExists True
Right Dataset and Columns are Valid

 Checking the merge_how Parameter
merge_how operator is Valid outer
checkDataSetExists False

 Ensuring Left->Right compatability
Converting Local Key from float64 to Int
PERFORMING MERGE LEFT->RIGHT
left_col GEOID2010 right_col GEOID10 how outer
/usr/local/lib/python3.6/dist-packages/pandas/core/ops/array_ops.py:253: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
  res_values = method(rvalues)
{% endraw %} {% raw %}
merged_df_geom.head()
Bank Name Address(es) Census Tract TRACT2010 GEOID2010 CSA2010 TRACTCE10 GEOID10 NAME10 CSA Tract geometry
0 Arundel Federal ... 333 E. Patapsco ... 250401.0 250401.0 2.45e+10 Brooklyn/Curtis ... 250401 24510250401 2504.01 Brooklyn/Curtis ... 2504 POLYGON ((-76.59...
1 NaN 3601 S Hanover St 250401.0 250401.0 2.45e+10 Brooklyn/Curtis ... 250401 24510250401 2504.01 Brooklyn/Curtis ... 2504 POLYGON ((-76.59...
2 Bank of America,... 20 N Howard St 40100.0 40100.0 2.45e+10 Downtown/Seton Hill 40100 24510040100 401.00 Downtown/Seton Hill 401 POLYGON ((-76.61...
3 NaN 100 S Charles St... 40100.0 40100.0 2.45e+10 Downtown/Seton Hill 40100 24510040100 401.00 Downtown/Seton Hill 401 POLYGON ((-76.61...
4 Branch Banking a... 2 N CHARLES ST 40100.0 40100.0 2.45e+10 Downtown/Seton Hill 40100 24510040100 401.00 Downtown/Seton Hill 401 POLYGON ((-76.61...
{% endraw %}
{% raw %}
# Primary Table
# Description: I created a public dataset from a google xlsx sheet 'Bank Addresses and Census Tract' from a workbook of the same name.
# Table: FDIC Baltimore Banks
# Columns: Bank Name, Address(es), Census Tract
left_ds = 'https://docs.google.com/spreadsheets/d/e/2PACX-1vTViIZu-hbvhM3L7dIRAG95ISa7TNhUwdzlYxYzc1ygJoaYc3_scaXHe8Rtj5iwNA/pub?gid=1078028768&single=true&output=csv'
left_col = 'Census Tract'

# Alternate Primary Table
# Description: Same workbook, different Sheet: 'Branches per tract' 
# Columns: Census Tract, Number branches per tract
# left_ds = 'https://docs.google.com/spreadsheets/d/e/2PACX-1vSHFrRSHva1f82ZQ7Uxwf3A1phqljj1oa2duGlZDM1vLtrm1GI5yHmpVX2ilTfMHQ/pub?gid=1698745725&single=true&output=csv'
# left_col = 'Number branches per tract'

# Crosswalk Table
# Table: Crosswalk Census Communities
# 'TRACT2010', 'GEOID2010', 'CSA2010'
crosswalk_ds = 'https://docs.google.com/spreadsheets/d/e/2PACX-1vREwwa_s8Ix39OYGnnS_wA8flOoEkU7reIV4o3ZhlwYhLXhpNEvnOia_uHUDBvnFptkLLHHlaQNvsQE/pub?output=csv'
use_crosswalk = True
crosswalk_left_col = 'TRACT2010'
crosswalk_right_col = 'GEOID2010'

# Secondary Table
# Table: Baltimore Boundaries
# 'TRACTCE10', 'GEOID10', 'CSA', 'NAME10', 'Tract', 'geometry'
right_ds = 'https://docs.google.com/spreadsheets/d/e/2PACX-1vQ8xXdUaT17jkdK0MWTJpg3GOy6jMWeaXTlguXNjCSb8Vr_FanSZQRaTU-m811fQz4kyMFK5wcahMNY/pub?gid=886223646&single=true&output=csv'
right_col ='GEOID10'

# merge_how = 'geometry'  # alternatively, pass a column name to pull only that column
merge_how = 'outer'
interactive = True

merged_df_geom = mergeDatasets( left_ds=left_ds, left_col=left_col, 
              use_crosswalk=use_crosswalk, crosswalk_ds=crosswalk_ds,
              crosswalk_left_col = crosswalk_left_col, crosswalk_right_col = crosswalk_right_col,
              right_ds=right_ds, right_col=right_col, 
              merge_how=merge_how, interactive = interactive )

merged_df_geom.head()
 Handling Left Dataset
retrieveDatasetFromUrl https://docs.google.com/spreadsheets/d/e/2PACX-1vTViIZu-hbvhM3L7dIRAG95ISa7TNhUwdzlYxYzc1ygJoaYc3_scaXHe8Rtj5iwNA/pub?gid=1078028768&single=true&output=csv
checkDataSetExists False
retrieveDatasetFromUrl https://docs.google.com/spreadsheets/d/e/2PACX-1vTViIZu-hbvhM3L7dIRAG95ISa7TNhUwdzlYxYzc1ygJoaYc3_scaXHe8Rtj5iwNA/pub?gid=1078028768&single=true&output=csv
checkDataSetExists True
checkDataSetExists True
checkDataSetExists True
Left Dataset and Columns are Valid

 Handling Right Dataset
retrieveDatasetFromUrl https://docs.google.com/spreadsheets/d/e/2PACX-1vQ8xXdUaT17jkdK0MWTJpg3GOy6jMWeaXTlguXNjCSb8Vr_FanSZQRaTU-m811fQz4kyMFK5wcahMNY/pub?gid=886223646&single=true&output=csv
checkDataSetExists False
retrieveDatasetFromUrl https://docs.google.com/spreadsheets/d/e/2PACX-1vQ8xXdUaT17jkdK0MWTJpg3GOy6jMWeaXTlguXNjCSb8Vr_FanSZQRaTU-m811fQz4kyMFK5wcahMNY/pub?gid=886223646&single=true&output=csv
checkDataSetExists True
checkDataSetExists True
checkDataSetExists True
Right Dataset and Columns are Valid

 Checking the merge_how Parameter
merge_how operator is Valid outer
checkDataSetExists False

 Checking the Crosswalk Parameter

 Handling Crosswalk Left Dataset Loading
retrieveDatasetFromUrl https://docs.google.com/spreadsheets/d/e/2PACX-1vREwwa_s8Ix39OYGnnS_wA8flOoEkU7reIV4o3ZhlwYhLXhpNEvnOia_uHUDBvnFptkLLHHlaQNvsQE/pub?output=csv
checkDataSetExists False
retrieveDatasetFromUrl https://docs.google.com/spreadsheets/d/e/2PACX-1vREwwa_s8Ix39OYGnnS_wA8flOoEkU7reIV4o3ZhlwYhLXhpNEvnOia_uHUDBvnFptkLLHHlaQNvsQE/pub?output=csv
checkDataSetExists True
checkDataSetExists True
checkDataSetExists True

 Handling Crosswalk Right Dataset Loading
retrieveDatasetFromUrl https://docs.google.com/spreadsheets/d/e/2PACX-1vREwwa_s8Ix39OYGnnS_wA8flOoEkU7reIV4o3ZhlwYhLXhpNEvnOia_uHUDBvnFptkLLHHlaQNvsQE/pub?output=csv
checkDataSetExists False
retrieveDatasetFromUrl https://docs.google.com/spreadsheets/d/e/2PACX-1vREwwa_s8Ix39OYGnnS_wA8flOoEkU7reIV4o3ZhlwYhLXhpNEvnOia_uHUDBvnFptkLLHHlaQNvsQE/pub?output=csv
checkDataSetExists True
checkDataSetExists True
checkDataSetExists True

 Assessment Completed

 Ensuring Left->Crosswalk compatability
Converting Local Key from float64 to Int

 Ensuring Crosswalk->Right compatability
PERFORMING MERGE LEFT->CROSSWALK
left_on TRACT2010 right_on GEOID2010 how outer

 Local Column Values Not Matched 
[-1321321321321325            400100            401101            401102
            401507            403401            403500            403803
            411306            411406            411408            420100
            420301            420701            420800            430800
            430900            440100            440200            440702
            441101            450300            452000            490601
            490602            491100            491201            491600
            492300            750101]
43

 Crosswalk Unique Column Values
[ 10100  10200  10300  10400  10500  20100  20200  20300  30100  30200
  40100  40200  60100  60200  60300  60400  70100  70200  70300  70400
  80101  80102  80200  80301  80302  80400  80500  80600  80700  80800
  90100  90200  90300  90400  90500  90600  90700  90800  90900 100100
 100200 100300 110100 110200 120100 120201 120202 120300 120400 120500
 120600 120700 130100 130200 130300 130400 130600 130700 130803 130804
 130805 130806 140100 140200 140300 150100 150200 150300 150400 150500
 150600 150701 150702 150800 150900 151000 151100 151200 151300 160100
 160200 160300 160400 160500 160600 160700 160801 160802 170100 170200
 170300 180100 180200 180300 190100 190200 190300 200100 200200 200300
 200400 200500 200600 200701 200702 200800 210100 210200 220100 230100
 230200 230300 240100 240200 240300 240400 250101 250102 250103 250203
 250204 250205 250206 250207 250301 250303 250401 250402 250500 250600
 260101 260102 260201 260202 260203 260301 260302 260303 260401 260402
 260403 260404 260501 260604 260605 260700 260800 260900 261000 261100
 270101 270102 270200 270301 270302 270401 270402 270501 270502 270600
 270701 270702 270703 270801 270802 270803 270804 270805 270901 270902
 270903 271001 271002 271101 271102 271200 271300 271400 271501 271503
 271600 271700 271801 271802 271900 272003 272004 272005 272006 272007
 280101 280102 280200 280301 280302 280401 280402 280403 280404 280500
  10000]
PERFORMING MERGE LEFT->RIGHT
left_col GEOID2010 right_col GEOID10 how outer
/usr/local/lib/python3.6/dist-packages/pandas/core/ops/array_ops.py:253: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
  res_values = method(rvalues)
Bank Name Address(es) Census Tract GEOID2010 TRACTCE10 GEOID10 NAME10 CSA Tract geometry
0 Arundel Federal ... 333 E. Patapsco ... 250401.0 2.45e+10 250401 24510250401 2504.01 Brooklyn/Curtis ... 2504 POLYGON ((-76.59...
1 NaN 3601 S Hanover St 250401.0 2.45e+10 250401 24510250401 2504.01 Brooklyn/Curtis ... 2504 POLYGON ((-76.59...
2 Bank of America,... 20 N Howard St 40100.0 2.45e+10 40100 24510040100 401.00 Downtown/Seton Hill 401 POLYGON ((-76.61...
3 NaN 100 S Charles St... 40100.0 2.45e+10 40100 24510040100 401.00 Downtown/Seton Hill 401 POLYGON ((-76.61...
4 Branch Banking a... 2 N CHARLES ST 40100.0 2.45e+10 40100 24510040100 401.00 Downtown/Seton Hill 401 POLYGON ((-76.61...
{% endraw %}

Here we can save the data so that it may be used in later tutorials.

{% raw %}
string = 'test_save_data_with_geom_and_csa'
merged_df_geom.to_csv(string+'.csv', encoding="utf-8", index=False, quoting=csv.QUOTE_ALL)
{% endraw %}
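If you want to confirm that the file written above round-trips cleanly, a quick check using the same filename string is:

{% raw %}
# Re-read the file we just wrote and confirm its shape looks right.
check_df = pd.read_csv(string + '.csv')
print(check_df.shape)
check_df.head()
{% endraw %}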

Download the data by:

  • Clicking the 'Files' tab in the left-hand menu of this screen.
  • Locating your file within the file explorer that appears directly under the 'Files' tab button.
  • Right-clicking the file and selecting the 'download' option from the dropdown.

In the next tutorial you will learn how to load this data as a geospatial dataset so that it can be mapped and mapping functionality can be applied to it.

You can bring this data into the next tutorial in one of two ways:

1) Upload the saved file to Google Drive and connect to your Drive path from within the next tutorial.

2) Download the dataset as directed above, navigate to the next tutorial, and upload the file there using the 'upload' button found in the 'Files' tab of the left-hand menu.

Interactive Example 2

{% raw %}
# When the prompts appear, enter the values not supplied from Interactive Example 1 and you will get the same output.
# This is to demonstrate that not all parameters must be known prior to executing the function.

mergeDatasets( left_ds=left_ds, left_col=left_col, right_ds=right_ds, interactive =True )
{% endraw %} {% raw %}
mergedDataset = mergeDatasets( left_ds=left_ds, left_col=left_col, use_crosswalk=use_crosswalk, right_ds=right_ds, right_col=right_col, merge_how = merge_how, interactive = interactive )
{% endraw %} {% raw %}
mergedDataset.dtypes
Bank Name        object
Address(es)      object
Census Tract    float64
TRACTCE10       float64
GEOID10         float64
NAME10          float64
CSA              object
Tract           float64
geometry         object
dtype: object
{% endraw %}
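Because the outer merge introduces NaNs, the whole-number ID columns come back as float64 (see GEOID10 and Tract above). If integer tract IDs are needed downstream, one option, assuming a pandas version that supports nullable integers, is to cast those columns to the 'Int64' dtype:

{% raw %}
# Cast float ID columns to pandas' nullable integer dtype so missing values survive as <NA>.
# NAME10 is left alone because its values are not whole numbers.
for col in ['Census Tract', 'TRACTCE10', 'GEOID10', 'Tract']:
    if col in mergedDataset.columns:
        mergedDataset[col] = mergedDataset[col].astype('Int64')

mergedDataset.dtypes
{% endraw %}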

Interactive Run Alone

{% raw %}
mergeDatasets()
{% endraw %}

Preconfigured Example 1

{% raw %}
# Census Crosswalk
# 'TRACT2010', 'GEOID2010', 'CSA2010'
left_ds = 'https://docs.google.com/spreadsheets/d/e/2PACX-1vREwwa_s8Ix39OYGnnS_wA8flOoEkU7reIV4o3ZhlwYhLXhpNEvnOia_uHUDBvnFptkLLHHlaQNvsQE/pub?output=csv'

# Baltimore Boundaries
# 'TRACTCE10', 'GEOID10', 'CSA', 'NAME10', 'Tract', 'geometry'
right_ds = 'https://docs.google.com/spreadsheets/d/e/2PACX-1vQ8xXdUaT17jkdK0MWTJpg3GOy6jMWeaXTlguXNjCSb8Vr_FanSZQRaTU-m811fQz4kyMFK5wcahMNY/pub?gid=886223646&single=true&output=csv'
# The left dataset's GEOID2010 column will be matched against the right dataset's GEOID10 column
left_col = 'GEOID2010'
right_col = 'GEOID10'
merge_how = 'outer'
interactive = True
{% endraw %} {% raw %}
mergeDatasets( left_ds=left_ds, left_col=left_col, right_ds=right_ds, right_col=right_col, merge_how = merge_how, interactive = interactive )
{% endraw %}