refactor: multi-dashboard structural migration

- Rename dbt project from toronto_housing to portfolio
- Restructure dbt models into domain subdirectories:
  - shared/ for cross-domain dimensions (dim_time)
  - staging/toronto/, intermediate/toronto/, marts/toronto/
- Update SQLAlchemy models for raw_toronto schema
- Add explicit cross-schema FK relationships for FactRentals (see the sketch below)
- Namespace figure factories under figures/toronto/
- Namespace notebooks under notebooks/toronto/
- Update Makefile with domain-specific targets and env loading
- Update all documentation for multi-dashboard structure

This enables adding new dashboard projects (e.g., /football, /energy)
without structural conflicts or naming collisions.
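
The cross-schema foreign keys called out above follow the usual SQLAlchemy pattern: a per-model `__table_args__` schema plus a schema-qualified `ForeignKey` target. The sketch below is illustrative only; apart from the `raw_toronto` schema named in this commit, the table, column, and schema names are assumptions, not the actual models.

```python
# Minimal sketch of an explicit cross-schema FK (hypothetical names; only the
# raw_toronto schema comes from the commit message above).
from sqlalchemy import Column, ForeignKey, Integer, Numeric
from sqlalchemy.orm import DeclarativeBase, relationship


class Base(DeclarativeBase):
    pass


class DimTime(Base):
    __tablename__ = "dim_time"
    __table_args__ = {"schema": "public_marts"}  # assumed schema for the shared dimension

    time_id = Column(Integer, primary_key=True)


class FactRentals(Base):
    __tablename__ = "fact_rentals"
    __table_args__ = {"schema": "raw_toronto"}

    rental_id = Column(Integer, primary_key=True)
    # Schema-qualified target so SQLAlchemy resolves the FK across schemas
    time_id = Column(Integer, ForeignKey("public_marts.dim_time.time_id"))
    avg_rent_2bed = Column(Numeric)

    time = relationship(DimTime)
```

With the schema spelled out on both `__table_args__` and the `ForeignKey` target, joins between `raw_toronto` facts and shared dimensions do not depend on the database's default search path.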

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
commit 62d1a52eed (parent a5d6866d63), 2026-02-01 19:08:20 -05:00
73 changed files with 1114 additions and 623 deletions

View File

@@ -1,17 +1,18 @@
# Toronto Neighbourhood Dashboard - Notebooks
# Dashboard Documentation Notebooks
Documentation notebooks for the Toronto Neighbourhood Dashboard visualizations. Each notebook documents how data is queried, transformed, and visualized using the figure factory pattern.
Documentation notebooks organized by dashboard project. Each notebook documents how data is queried, transformed, and visualized using the figure factory pattern.
## Directory Structure
```
notebooks/
├── README.md # This file
├── overview/ # Overview tab visualizations
├── housing/ # Housing tab visualizations
├── safety/ # Safety tab visualizations
├── demographics/ # Demographics tab visualizations
└── amenities/ # Amenities tab visualizations
└── toronto/ # Toronto Neighbourhood Dashboard
    ├── overview/ # Overview tab visualizations
    ├── housing/ # Housing tab visualizations
    ├── safety/ # Safety tab visualizations
    ├── demographics/ # Demographics tab visualizations
    └── amenities/ # Amenities tab visualizations
```
## Notebook Template

View File

@@ -1,123 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Amenity Radar Chart\n",
"\n",
"Spider/radar chart comparing amenity categories for selected neighbourhoods."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Data Reference\n",
"\n",
"### Source Tables\n",
"\n",
"| Table | Grain | Key Columns |\n",
"|-------|-------|-------------|\n",
"| `mart_neighbourhood_amenities` | neighbourhood × year | parks_index, schools_index, transit_index |\n",
"\n",
"### SQL Query"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "import pandas as pd\nfrom sqlalchemy import create_engine\nfrom dotenv import load_dotenv\nimport os\n\n# Load .env from project root\nload_dotenv('../../.env')\n\nengine = create_engine(os.environ['DATABASE_URL'])\n\nquery = \"\"\"\nSELECT\n neighbourhood_name,\n parks_index,\n schools_index,\n transit_index,\n amenity_index,\n amenity_tier\nFROM public_marts.mart_neighbourhood_amenities\nWHERE year = (SELECT MAX(year) FROM public_marts.mart_neighbourhood_amenities)\nORDER BY amenity_index DESC\n\"\"\"\n\ndf = pd.read_sql(query, engine)\nprint(f\"Loaded {len(df)} neighbourhoods\")"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Transformation Steps\n",
"\n",
"1. Select top 5 and bottom 5 neighbourhoods by amenity index\n",
"2. Reshape for radar chart format"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Select representative neighbourhoods\n",
"top_5 = df.head(5)\n",
"bottom_5 = df.tail(5)\n",
"\n",
"# Prepare radar data\n",
"categories = ['Parks', 'Schools', 'Transit']\n",
"index_columns = ['parks_index', 'schools_index', 'transit_index']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Sample Output"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"Top 5 Amenity-Rich Neighbourhoods:\")\n",
"display(top_5[['neighbourhood_name', 'parks_index', 'schools_index', 'transit_index', 'amenity_index']])\n",
"print(\"\\nBottom 5 Underserved Neighbourhoods:\")\n",
"display(bottom_5[['neighbourhood_name', 'parks_index', 'schools_index', 'transit_index', 'amenity_index']])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Data Visualization\n",
"\n",
"### Figure Factory\n",
"\n",
"Uses `create_radar` from `portfolio_app.figures.radar`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "import sys\nsys.path.insert(0, '../..')\n\nfrom portfolio_app.figures.radar import create_comparison_radar\n\n# Compare top neighbourhood vs city average (100)\ntop_hood = top_5.iloc[0]\nmetrics = ['parks_index', 'schools_index', 'transit_index']\n\nfig = create_comparison_radar(\n selected_data=top_hood.to_dict(),\n average_data={'parks_index': 100, 'schools_index': 100, 'transit_index': 100},\n metrics=metrics,\n selected_name=top_hood['neighbourhood_name'],\n average_name='City Average',\n title=f\"Amenity Profile: {top_hood['neighbourhood_name']} vs City Average\",\n)\n\nfig.show()"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Index Interpretation\n",
"\n",
"| Value | Meaning |\n",
"|-------|--------|\n",
"| < 100 | Below city average |\n",
"| = 100 | City average |\n",
"| > 100 | Above city average |"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.11.0"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -19,7 +19,7 @@
"\n",
"| Table | Grain | Key Columns |\n",
"|-------|-------|-------------|\n",
"| `mart_neighbourhood_amenities` | neighbourhood \u00d7 year | amenity_index, total_amenities_per_1000, amenity_tier, geometry |\n",
"| `mart_neighbourhood_amenities` | neighbourhood × year | amenity_index, total_amenities_per_1000, amenity_tier, geometry |\n",
"\n",
"### SQL Query"
]
@@ -30,15 +30,16 @@
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from sqlalchemy import create_engine\n",
"from dotenv import load_dotenv\n",
"import os\n",
"\n",
"# Load .env from project root\n",
"load_dotenv('../../.env')\n",
"import pandas as pd\n",
"from dotenv import load_dotenv\n",
"from sqlalchemy import create_engine\n",
"\n",
"engine = create_engine(os.environ['DATABASE_URL'])\n",
"# Load .env from project root\n",
"load_dotenv(\"../../.env\")\n",
"\n",
"engine = create_engine(os.environ[\"DATABASE_URL\"])\n",
"\n",
"query = \"\"\"\n",
"SELECT\n",
@@ -79,17 +80,16 @@
"metadata": {},
"outputs": [],
"source": [
"import geopandas as gpd\n",
"import json\n",
"\n",
"import geopandas as gpd\n",
"\n",
"gdf = gpd.GeoDataFrame(\n",
" df,\n",
" geometry=gpd.GeoSeries.from_wkb(df['geometry']),\n",
" crs='EPSG:4326'\n",
" df, geometry=gpd.GeoSeries.from_wkb(df[\"geometry\"]), crs=\"EPSG:4326\"\n",
")\n",
"\n",
"geojson = json.loads(gdf.to_json())\n",
"data = df.drop(columns=['geometry']).to_dict('records')"
"data = df.drop(columns=[\"geometry\"]).to_dict(\"records\")"
]
},
{
@@ -105,7 +105,9 @@
"metadata": {},
"outputs": [],
"source": [
"df[['neighbourhood_name', 'total_amenities_per_1000', 'amenity_index', 'amenity_tier']].head(10)"
"df[\n",
" [\"neighbourhood_name\", \"total_amenities_per_1000\", \"amenity_index\", \"amenity_tier\"]\n",
"].head(10)"
]
},
{
@@ -116,7 +118,7 @@
"\n",
"### Figure Factory\n",
"\n",
"Uses `create_choropleth_figure` from `portfolio_app.figures.choropleth`."
"Uses `create_choropleth_figure` from `portfolio_app.figures.toronto.choropleth`."
]
},
{
@@ -126,18 +128,24 @@
"outputs": [],
"source": [
"import sys\n",
"sys.path.insert(0, '../..')\n",
"\n",
"from portfolio_app.figures.choropleth import create_choropleth_figure\n",
"sys.path.insert(0, \"../..\")\n",
"\n",
"from portfolio_app.figures.toronto.choropleth import create_choropleth_figure\n",
"\n",
"fig = create_choropleth_figure(\n",
" geojson=geojson,\n",
" data=data,\n",
" location_key='neighbourhood_id',\n",
" color_column='total_amenities_per_1000',\n",
" hover_data=['neighbourhood_name', 'amenity_index', 'parks_per_1000', 'schools_per_1000'],\n",
" color_scale='Greens',\n",
" title='Toronto Amenities per 1,000 Population',\n",
" location_key=\"neighbourhood_id\",\n",
" color_column=\"total_amenities_per_1000\",\n",
" hover_data=[\n",
" \"neighbourhood_name\",\n",
" \"amenity_index\",\n",
" \"parks_per_1000\",\n",
" \"schools_per_1000\",\n",
" ],\n",
" color_scale=\"Greens\",\n",
" title=\"Toronto Amenities per 1,000 Population\",\n",
" zoom=10,\n",
")\n",
"\n",

View File

@@ -0,0 +1,191 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Amenity Radar Chart\n",
"\n",
"Spider/radar chart comparing amenity categories for selected neighbourhoods."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Data Reference\n",
"\n",
"### Source Tables\n",
"\n",
"| Table | Grain | Key Columns |\n",
"|-------|-------|-------------|\n",
"| `mart_neighbourhood_amenities` | neighbourhood × year | parks_index, schools_index, transit_index |\n",
"\n",
"### SQL Query"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"import pandas as pd\n",
"from dotenv import load_dotenv\n",
"from sqlalchemy import create_engine\n",
"\n",
"# Load .env from project root\n",
"load_dotenv(\"../../.env\")\n",
"\n",
"engine = create_engine(os.environ[\"DATABASE_URL\"])\n",
"\n",
"query = \"\"\"\n",
"SELECT\n",
" neighbourhood_name,\n",
" parks_index,\n",
" schools_index,\n",
" transit_index,\n",
" amenity_index,\n",
" amenity_tier\n",
"FROM public_marts.mart_neighbourhood_amenities\n",
"WHERE year = (SELECT MAX(year) FROM public_marts.mart_neighbourhood_amenities)\n",
"ORDER BY amenity_index DESC\n",
"\"\"\"\n",
"\n",
"df = pd.read_sql(query, engine)\n",
"print(f\"Loaded {len(df)} neighbourhoods\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Transformation Steps\n",
"\n",
"1. Select top 5 and bottom 5 neighbourhoods by amenity index\n",
"2. Reshape for radar chart format"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Select representative neighbourhoods\n",
"top_5 = df.head(5)\n",
"bottom_5 = df.tail(5)\n",
"\n",
"# Prepare radar data\n",
"categories = [\"Parks\", \"Schools\", \"Transit\"]\n",
"index_columns = [\"parks_index\", \"schools_index\", \"transit_index\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Sample Output"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"Top 5 Amenity-Rich Neighbourhoods:\")\n",
"display(\n",
" top_5[\n",
" [\n",
" \"neighbourhood_name\",\n",
" \"parks_index\",\n",
" \"schools_index\",\n",
" \"transit_index\",\n",
" \"amenity_index\",\n",
" ]\n",
" ]\n",
")\n",
"print(\"\\nBottom 5 Underserved Neighbourhoods:\")\n",
"display(\n",
" bottom_5[\n",
" [\n",
" \"neighbourhood_name\",\n",
" \"parks_index\",\n",
" \"schools_index\",\n",
" \"transit_index\",\n",
" \"amenity_index\",\n",
" ]\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Data Visualization\n",
"\n",
"### Figure Factory\n",
"\n",
"Uses `create_radar` from `portfolio_app.figures.toronto.radar`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import sys\n",
"\n",
"sys.path.insert(0, \"../..\")\n",
"\n",
"from portfolio_app.figures.toronto.radar import create_comparison_radar\n",
"\n",
"# Compare top neighbourhood vs city average (100)\n",
"top_hood = top_5.iloc[0]\n",
"metrics = [\"parks_index\", \"schools_index\", \"transit_index\"]\n",
"\n",
"fig = create_comparison_radar(\n",
" selected_data=top_hood.to_dict(),\n",
" average_data={\"parks_index\": 100, \"schools_index\": 100, \"transit_index\": 100},\n",
" metrics=metrics,\n",
" selected_name=top_hood[\"neighbourhood_name\"],\n",
" average_name=\"City Average\",\n",
" title=f\"Amenity Profile: {top_hood['neighbourhood_name']} vs City Average\",\n",
")\n",
"\n",
"fig.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Index Interpretation\n",
"\n",
"| Value | Meaning |\n",
"|-------|--------|\n",
"| < 100 | Below city average |\n",
"| = 100 | City average |\n",
"| > 100 | Above city average |"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.11.0"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -19,7 +19,7 @@
"\n",
"| Table | Grain | Key Columns |\n",
"|-------|-------|-------------|\n",
"| `mart_neighbourhood_amenities` | neighbourhood \u00d7 year | transit_per_1000, transit_index, transit_count |\n",
"| `mart_neighbourhood_amenities` | neighbourhood × year | transit_per_1000, transit_index, transit_count |\n",
"\n",
"### SQL Query"
]
@@ -30,15 +30,16 @@
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from sqlalchemy import create_engine\n",
"from dotenv import load_dotenv\n",
"import os\n",
"\n",
"# Load .env from project root\n",
"load_dotenv('../../.env')\n",
"import pandas as pd\n",
"from dotenv import load_dotenv\n",
"from sqlalchemy import create_engine\n",
"\n",
"engine = create_engine(os.environ['DATABASE_URL'])\n",
"# Load .env from project root\n",
"load_dotenv(\"../../.env\")\n",
"\n",
"engine = create_engine(os.environ[\"DATABASE_URL\"])\n",
"\n",
"query = \"\"\"\n",
"SELECT\n",
@@ -74,7 +75,7 @@
"metadata": {},
"outputs": [],
"source": [
"data = df.head(20).to_dict('records')"
"data = df.head(20).to_dict(\"records\")"
]
},
{
@@ -90,7 +91,9 @@
"metadata": {},
"outputs": [],
"source": [
"df[['neighbourhood_name', 'transit_per_1000', 'transit_index', 'transit_count']].head(10)"
"df[[\"neighbourhood_name\", \"transit_per_1000\", \"transit_index\", \"transit_count\"]].head(\n",
" 10\n",
")"
]
},
{
@@ -101,7 +104,7 @@
"\n",
"### Figure Factory\n",
"\n",
"Uses `create_horizontal_bar` from `portfolio_app.figures.bar_charts`."
"Uses `create_horizontal_bar` from `portfolio_app.figures.toronto.bar_charts`."
]
},
{
@@ -111,17 +114,18 @@
"outputs": [],
"source": [
"import sys\n",
"sys.path.insert(0, '../..')\n",
"\n",
"from portfolio_app.figures.bar_charts import create_horizontal_bar\n",
"sys.path.insert(0, \"../..\")\n",
"\n",
"from portfolio_app.figures.toronto.bar_charts import create_horizontal_bar\n",
"\n",
"fig = create_horizontal_bar(\n",
" data=data,\n",
" name_column='neighbourhood_name',\n",
" value_column='transit_per_1000',\n",
" title='Top 20 Neighbourhoods by Transit Accessibility',\n",
" color='#00BCD4',\n",
" value_format='.2f',\n",
" name_column=\"neighbourhood_name\",\n",
" value_column=\"transit_per_1000\",\n",
" title=\"Top 20 Neighbourhoods by Transit Accessibility\",\n",
" color=\"#00BCD4\",\n",
" value_format=\".2f\",\n",
")\n",
"\n",
"fig.show()"
@@ -140,7 +144,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(f\"City-wide Transit Statistics:\")\n",
"print(\"City-wide Transit Statistics:\")\n",
"print(f\" Total Transit Stops: {df['transit_count'].sum():,.0f}\")\n",
"print(f\" Average per 1,000 pop: {df['transit_per_1000'].mean():.2f}\")\n",
"print(f\" Median per 1,000 pop: {df['transit_per_1000'].median():.2f}\")\n",

View File

@@ -19,7 +19,7 @@
"\n",
"| Table | Grain | Key Columns |\n",
"|-------|-------|-------------|\n",
"| `mart_neighbourhood_demographics` | neighbourhood \u00d7 year | median_age, age_index, city_avg_age |\n",
"| `mart_neighbourhood_demographics` | neighbourhood × year | median_age, age_index, city_avg_age |\n",
"\n",
"### SQL Query"
]
@@ -30,15 +30,16 @@
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from sqlalchemy import create_engine\n",
"from dotenv import load_dotenv\n",
"import os\n",
"\n",
"# Load .env from project root\n",
"load_dotenv('../../.env')\n",
"import pandas as pd\n",
"from dotenv import load_dotenv\n",
"from sqlalchemy import create_engine\n",
"\n",
"engine = create_engine(os.environ['DATABASE_URL'])\n",
"# Load .env from project root\n",
"load_dotenv(\"../../.env\")\n",
"\n",
"engine = create_engine(os.environ[\"DATABASE_URL\"])\n",
"\n",
"query = \"\"\"\n",
"SELECT\n",
@@ -76,13 +77,13 @@
"metadata": {},
"outputs": [],
"source": [
"city_avg = df['city_avg_age'].iloc[0]\n",
"df['age_category'] = df['median_age'].apply(\n",
" lambda x: 'Younger' if x < city_avg else 'Older'\n",
"city_avg = df[\"city_avg_age\"].iloc[0]\n",
"df[\"age_category\"] = df[\"median_age\"].apply(\n",
" lambda x: \"Younger\" if x < city_avg else \"Older\"\n",
")\n",
"df['age_deviation'] = df['median_age'] - city_avg\n",
"df[\"age_deviation\"] = df[\"median_age\"] - city_avg\n",
"\n",
"data = df.to_dict('records')"
"data = df.to_dict(\"records\")"
]
},
{
@@ -100,9 +101,13 @@
"source": [
"print(f\"City Average Age: {city_avg:.1f}\")\n",
"print(\"\\nYoungest Neighbourhoods:\")\n",
"display(df.tail(5)[['neighbourhood_name', 'median_age', 'age_index', 'pct_renter_occupied']])\n",
"display(\n",
" df.tail(5)[[\"neighbourhood_name\", \"median_age\", \"age_index\", \"pct_renter_occupied\"]]\n",
")\n",
"print(\"\\nOldest Neighbourhoods:\")\n",
"display(df.head(5)[['neighbourhood_name', 'median_age', 'age_index', 'pct_renter_occupied']])"
"display(\n",
" df.head(5)[[\"neighbourhood_name\", \"median_age\", \"age_index\", \"pct_renter_occupied\"]]\n",
")"
]
},
{
@@ -113,7 +118,7 @@
"\n",
"### Figure Factory\n",
"\n",
"Uses `create_ranking_bar` from `portfolio_app.figures.bar_charts`."
"Uses `create_ranking_bar` from `portfolio_app.figures.toronto.bar_charts`."
]
},
{
@@ -123,20 +128,21 @@
"outputs": [],
"source": [
"import sys\n",
"sys.path.insert(0, '../..')\n",
"\n",
"from portfolio_app.figures.bar_charts import create_ranking_bar\n",
"sys.path.insert(0, \"../..\")\n",
"\n",
"from portfolio_app.figures.toronto.bar_charts import create_ranking_bar\n",
"\n",
"fig = create_ranking_bar(\n",
" data=data,\n",
" name_column='neighbourhood_name',\n",
" value_column='median_age',\n",
" title='Youngest & Oldest Neighbourhoods (Median Age)',\n",
" name_column=\"neighbourhood_name\",\n",
" value_column=\"median_age\",\n",
" title=\"Youngest & Oldest Neighbourhoods (Median Age)\",\n",
" top_n=10,\n",
" bottom_n=10,\n",
" color_top='#FF9800', # Orange for older\n",
" color_bottom='#2196F3', # Blue for younger\n",
" value_format='.1f',\n",
" color_top=\"#FF9800\", # Orange for older\n",
" color_bottom=\"#2196F3\", # Blue for younger\n",
" value_format=\".1f\",\n",
")\n",
"\n",
"fig.show()"
@@ -157,7 +163,7 @@
"source": [
"# Age by income quintile\n",
"print(\"Median Age by Income Quintile:\")\n",
"df.groupby('income_quintile')['median_age'].mean().round(1)"
"df.groupby(\"income_quintile\")[\"median_age\"].mean().round(1)"
]
}
],

View File

@@ -19,7 +19,7 @@
"\n",
"| Table | Grain | Key Columns |\n",
"|-------|-------|-------------|\n",
"| `mart_neighbourhood_demographics` | neighbourhood \u00d7 year | median_household_income, income_index, income_quintile, geometry |\n",
"| `mart_neighbourhood_demographics` | neighbourhood × year | median_household_income, income_index, income_quintile, geometry |\n",
"\n",
"### SQL Query"
]
@@ -30,15 +30,16 @@
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from sqlalchemy import create_engine\n",
"from dotenv import load_dotenv\n",
"import os\n",
"\n",
"# Load .env from project root\n",
"load_dotenv('../../.env')\n",
"import pandas as pd\n",
"from dotenv import load_dotenv\n",
"from sqlalchemy import create_engine\n",
"\n",
"engine = create_engine(os.environ['DATABASE_URL'])\n",
"# Load .env from project root\n",
"load_dotenv(\"../../.env\")\n",
"\n",
"engine = create_engine(os.environ[\"DATABASE_URL\"])\n",
"\n",
"query = \"\"\"\n",
"SELECT\n",
@@ -77,19 +78,18 @@
"metadata": {},
"outputs": [],
"source": [
"import geopandas as gpd\n",
"import json\n",
"\n",
"df['income_thousands'] = df['median_household_income'] / 1000\n",
"import geopandas as gpd\n",
"\n",
"df[\"income_thousands\"] = df[\"median_household_income\"] / 1000\n",
"\n",
"gdf = gpd.GeoDataFrame(\n",
" df,\n",
" geometry=gpd.GeoSeries.from_wkb(df['geometry']),\n",
" crs='EPSG:4326'\n",
" df, geometry=gpd.GeoSeries.from_wkb(df[\"geometry\"]), crs=\"EPSG:4326\"\n",
")\n",
"\n",
"geojson = json.loads(gdf.to_json())\n",
"data = df.drop(columns=['geometry']).to_dict('records')"
"data = df.drop(columns=[\"geometry\"]).to_dict(\"records\")"
]
},
{
@@ -105,7 +105,9 @@
"metadata": {},
"outputs": [],
"source": [
"df[['neighbourhood_name', 'median_household_income', 'income_index', 'income_quintile']].head(10)"
"df[\n",
" [\"neighbourhood_name\", \"median_household_income\", \"income_index\", \"income_quintile\"]\n",
"].head(10)"
]
},
{
@@ -116,7 +118,7 @@
"\n",
"### Figure Factory\n",
"\n",
"Uses `create_choropleth_figure` from `portfolio_app.figures.choropleth`."
"Uses `create_choropleth_figure` from `portfolio_app.figures.toronto.choropleth`."
]
},
{
@@ -126,18 +128,19 @@
"outputs": [],
"source": [
"import sys\n",
"sys.path.insert(0, '../..')\n",
"\n",
"from portfolio_app.figures.choropleth import create_choropleth_figure\n",
"sys.path.insert(0, \"../..\")\n",
"\n",
"from portfolio_app.figures.toronto.choropleth import create_choropleth_figure\n",
"\n",
"fig = create_choropleth_figure(\n",
" geojson=geojson,\n",
" data=data,\n",
" location_key='neighbourhood_id',\n",
" color_column='median_household_income',\n",
" hover_data=['neighbourhood_name', 'income_index', 'income_quintile'],\n",
" color_scale='Viridis',\n",
" title='Toronto Median Household Income by Neighbourhood',\n",
" location_key=\"neighbourhood_id\",\n",
" color_column=\"median_household_income\",\n",
" hover_data=[\"neighbourhood_name\", \"income_index\", \"income_quintile\"],\n",
" color_scale=\"Viridis\",\n",
" title=\"Toronto Median Household Income by Neighbourhood\",\n",
" zoom=10,\n",
")\n",
"\n",
@@ -157,7 +160,9 @@
"metadata": {},
"outputs": [],
"source": [
"df.groupby('income_quintile')['median_household_income'].agg(['count', 'mean', 'min', 'max']).round(0)"
"df.groupby(\"income_quintile\")[\"median_household_income\"].agg(\n",
" [\"count\", \"mean\", \"min\", \"max\"]\n",
").round(0)"
]
}
],

View File

@@ -19,7 +19,7 @@
"\n",
"| Table | Grain | Key Columns |\n",
"|-------|-------|-------------|\n",
"| `mart_neighbourhood_demographics` | neighbourhood \u00d7 year | population_density, population, land_area_sqkm |\n",
"| `mart_neighbourhood_demographics` | neighbourhood × year | population_density, population, land_area_sqkm |\n",
"\n",
"### SQL Query"
]
@@ -30,15 +30,16 @@
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from sqlalchemy import create_engine\n",
"from dotenv import load_dotenv\n",
"import os\n",
"\n",
"# Load .env from project root\n",
"load_dotenv('../../.env')\n",
"import pandas as pd\n",
"from dotenv import load_dotenv\n",
"from sqlalchemy import create_engine\n",
"\n",
"engine = create_engine(os.environ['DATABASE_URL'])\n",
"# Load .env from project root\n",
"load_dotenv(\"../../.env\")\n",
"\n",
"engine = create_engine(os.environ[\"DATABASE_URL\"])\n",
"\n",
"query = \"\"\"\n",
"SELECT\n",
@@ -74,7 +75,7 @@
"metadata": {},
"outputs": [],
"source": [
"data = df.head(20).to_dict('records')"
"data = df.head(20).to_dict(\"records\")"
]
},
{
@@ -90,7 +91,9 @@
"metadata": {},
"outputs": [],
"source": [
"df[['neighbourhood_name', 'population_density', 'population', 'land_area_sqkm']].head(10)"
"df[[\"neighbourhood_name\", \"population_density\", \"population\", \"land_area_sqkm\"]].head(\n",
" 10\n",
")"
]
},
{
@@ -101,7 +104,7 @@
"\n",
"### Figure Factory\n",
"\n",
"Uses `create_horizontal_bar` from `portfolio_app.figures.bar_charts`."
"Uses `create_horizontal_bar` from `portfolio_app.figures.toronto.bar_charts`."
]
},
{
@@ -111,17 +114,18 @@
"outputs": [],
"source": [
"import sys\n",
"sys.path.insert(0, '../..')\n",
"\n",
"from portfolio_app.figures.bar_charts import create_horizontal_bar\n",
"sys.path.insert(0, \"../..\")\n",
"\n",
"from portfolio_app.figures.toronto.bar_charts import create_horizontal_bar\n",
"\n",
"fig = create_horizontal_bar(\n",
" data=data,\n",
" name_column='neighbourhood_name',\n",
" value_column='population_density',\n",
" title='Top 20 Most Dense Neighbourhoods',\n",
" color='#9C27B0',\n",
" value_format=',.0f',\n",
" name_column=\"neighbourhood_name\",\n",
" value_column=\"population_density\",\n",
" title=\"Top 20 Most Dense Neighbourhoods\",\n",
" color=\"#9C27B0\",\n",
" value_format=\",.0f\",\n",
")\n",
"\n",
"fig.show()"
@@ -140,7 +144,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(f\"City-wide Statistics:\")\n",
"print(\"City-wide Statistics:\")\n",
"print(f\" Total Population: {df['population'].sum():,.0f}\")\n",
"print(f\" Total Area: {df['land_area_sqkm'].sum():,.1f} sq km\")\n",
"print(f\" Average Density: {df['population_density'].mean():,.0f} per sq km\")\n",

View File

@@ -19,7 +19,7 @@
"\n",
"| Table | Grain | Key Columns |\n",
"|-------|-------|-------------|\n",
"| `mart_neighbourhood_housing` | neighbourhood \u00d7 year | affordability_index, rent_to_income_pct, avg_rent_2bed, geometry |\n",
"| `mart_neighbourhood_housing` | neighbourhood × year | affordability_index, rent_to_income_pct, avg_rent_2bed, geometry |\n",
"\n",
"### SQL Query"
]
@@ -30,15 +30,16 @@
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from sqlalchemy import create_engine\n",
"from dotenv import load_dotenv\n",
"import os\n",
"\n",
"# Load .env from project root\n",
"load_dotenv('../../.env')\n",
"import pandas as pd\n",
"from dotenv import load_dotenv\n",
"from sqlalchemy import create_engine\n",
"\n",
"engine = create_engine(os.environ['DATABASE_URL'])\n",
"# Load .env from project root\n",
"load_dotenv(\"../../.env\")\n",
"\n",
"engine = create_engine(os.environ[\"DATABASE_URL\"])\n",
"\n",
"query = \"\"\"\n",
"SELECT\n",
@@ -77,17 +78,16 @@
"metadata": {},
"outputs": [],
"source": [
"import geopandas as gpd\n",
"import json\n",
"\n",
"import geopandas as gpd\n",
"\n",
"gdf = gpd.GeoDataFrame(\n",
" df,\n",
" geometry=gpd.GeoSeries.from_wkb(df['geometry']),\n",
" crs='EPSG:4326'\n",
" df, geometry=gpd.GeoSeries.from_wkb(df[\"geometry\"]), crs=\"EPSG:4326\"\n",
")\n",
"\n",
"geojson = json.loads(gdf.to_json())\n",
"data = df.drop(columns=['geometry']).to_dict('records')"
"data = df.drop(columns=[\"geometry\"]).to_dict(\"records\")"
]
},
{
@@ -103,7 +103,15 @@
"metadata": {},
"outputs": [],
"source": [
"df[['neighbourhood_name', 'affordability_index', 'rent_to_income_pct', 'avg_rent_2bed', 'is_affordable']].head(10)"
"df[\n",
" [\n",
" \"neighbourhood_name\",\n",
" \"affordability_index\",\n",
" \"rent_to_income_pct\",\n",
" \"avg_rent_2bed\",\n",
" \"is_affordable\",\n",
" ]\n",
"].head(10)"
]
},
{
@@ -114,7 +122,7 @@
"\n",
"### Figure Factory\n",
"\n",
"Uses `create_choropleth_figure` from `portfolio_app.figures.choropleth`.\n",
"Uses `create_choropleth_figure` from `portfolio_app.figures.toronto.choropleth`.\n",
"\n",
"**Key Parameters:**\n",
"- `color_column`: 'affordability_index'\n",
@@ -128,18 +136,19 @@
"outputs": [],
"source": [
"import sys\n",
"sys.path.insert(0, '../..')\n",
"\n",
"from portfolio_app.figures.choropleth import create_choropleth_figure\n",
"sys.path.insert(0, \"../..\")\n",
"\n",
"from portfolio_app.figures.toronto.choropleth import create_choropleth_figure\n",
"\n",
"fig = create_choropleth_figure(\n",
" geojson=geojson,\n",
" data=data,\n",
" location_key='neighbourhood_id',\n",
" color_column='affordability_index',\n",
" hover_data=['neighbourhood_name', 'rent_to_income_pct', 'avg_rent_2bed'],\n",
" color_scale='RdYlGn_r', # Reversed: lower index (affordable) = green\n",
" title='Toronto Housing Affordability Index',\n",
" location_key=\"neighbourhood_id\",\n",
" color_column=\"affordability_index\",\n",
" hover_data=[\"neighbourhood_name\", \"rent_to_income_pct\", \"avg_rent_2bed\"],\n",
" color_scale=\"RdYlGn_r\", # Reversed: lower index (affordable) = green\n",
" title=\"Toronto Housing Affordability Index\",\n",
" zoom=10,\n",
")\n",
"\n",

View File

@@ -19,7 +19,7 @@
"\n",
"| Table | Grain | Key Columns |\n",
"|-------|-------|-------------|\n",
"| `mart_neighbourhood_housing` | neighbourhood \u00d7 year | year, avg_rent_2bed, rent_yoy_change_pct |\n",
"| `mart_neighbourhood_housing` | neighbourhood × year | year, avg_rent_2bed, rent_yoy_change_pct |\n",
"\n",
"### SQL Query"
]
@@ -30,15 +30,16 @@
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from sqlalchemy import create_engine\n",
"from dotenv import load_dotenv\n",
"import os\n",
"\n",
"# Load .env from project root\n",
"load_dotenv('../../.env')\n",
"import pandas as pd\n",
"from dotenv import load_dotenv\n",
"from sqlalchemy import create_engine\n",
"\n",
"engine = create_engine(os.environ['DATABASE_URL'])\n",
"# Load .env from project root\n",
"load_dotenv(\"../../.env\")\n",
"\n",
"engine = create_engine(os.environ[\"DATABASE_URL\"])\n",
"\n",
"# City-wide average rent by year\n",
"query = \"\"\"\n",
@@ -77,23 +78,25 @@
"outputs": [],
"source": [
"# Create date column from year\n",
"df['date'] = pd.to_datetime(df['year'].astype(str) + '-01-01')\n",
"df[\"date\"] = pd.to_datetime(df[\"year\"].astype(str) + \"-01-01\")\n",
"\n",
"# Melt for multi-line chart\n",
"df_melted = df.melt(\n",
" id_vars=['year', 'date'],\n",
" value_vars=['avg_rent_bachelor', 'avg_rent_1bed', 'avg_rent_2bed', 'avg_rent_3bed'],\n",
" var_name='bedroom_type',\n",
" value_name='avg_rent'\n",
" id_vars=[\"year\", \"date\"],\n",
" value_vars=[\"avg_rent_bachelor\", \"avg_rent_1bed\", \"avg_rent_2bed\", \"avg_rent_3bed\"],\n",
" var_name=\"bedroom_type\",\n",
" value_name=\"avg_rent\",\n",
")\n",
"\n",
"# Clean labels\n",
"df_melted['bedroom_type'] = df_melted['bedroom_type'].map({\n",
" 'avg_rent_bachelor': 'Bachelor',\n",
" 'avg_rent_1bed': '1 Bedroom',\n",
" 'avg_rent_2bed': '2 Bedroom',\n",
" 'avg_rent_3bed': '3 Bedroom'\n",
"})"
"df_melted[\"bedroom_type\"] = df_melted[\"bedroom_type\"].map(\n",
" {\n",
" \"avg_rent_bachelor\": \"Bachelor\",\n",
" \"avg_rent_1bed\": \"1 Bedroom\",\n",
" \"avg_rent_2bed\": \"2 Bedroom\",\n",
" \"avg_rent_3bed\": \"3 Bedroom\",\n",
" }\n",
")"
]
},
{
@@ -109,7 +112,16 @@
"metadata": {},
"outputs": [],
"source": [
"df[['year', 'avg_rent_bachelor', 'avg_rent_1bed', 'avg_rent_2bed', 'avg_rent_3bed', 'avg_yoy_change']]"
"df[\n",
" [\n",
" \"year\",\n",
" \"avg_rent_bachelor\",\n",
" \"avg_rent_1bed\",\n",
" \"avg_rent_2bed\",\n",
" \"avg_rent_3bed\",\n",
" \"avg_yoy_change\",\n",
" ]\n",
"]"
]
},
{
@@ -120,7 +132,7 @@
"\n",
"### Figure Factory\n",
"\n",
"Uses `create_price_time_series` from `portfolio_app.figures.time_series`.\n",
"Uses `create_price_time_series` from `portfolio_app.figures.toronto.time_series`.\n",
"\n",
"**Key Parameters:**\n",
"- `date_column`: 'date'\n",
@@ -135,18 +147,19 @@
"outputs": [],
"source": [
"import sys\n",
"sys.path.insert(0, '../..')\n",
"\n",
"from portfolio_app.figures.time_series import create_price_time_series\n",
"sys.path.insert(0, \"../..\")\n",
"\n",
"data = df_melted.to_dict('records')\n",
"from portfolio_app.figures.toronto.time_series import create_price_time_series\n",
"\n",
"data = df_melted.to_dict(\"records\")\n",
"\n",
"fig = create_price_time_series(\n",
" data=data,\n",
" date_column='date',\n",
" price_column='avg_rent',\n",
" group_column='bedroom_type',\n",
" title='Toronto Average Rent Trend (5 Years)',\n",
" date_column=\"date\",\n",
" price_column=\"avg_rent\",\n",
" group_column=\"bedroom_type\",\n",
" title=\"Toronto Average Rent Trend (5 Years)\",\n",
")\n",
"\n",
"fig.show()"
@@ -167,7 +180,7 @@
"source": [
"# Show year-over-year changes\n",
"print(\"Year-over-Year Rent Change (%)\")\n",
"df[['year', 'avg_yoy_change']].dropna()"
"df[[\"year\", \"avg_yoy_change\"]].dropna()"
]
}
],

View File

@@ -19,7 +19,7 @@
"\n",
"| Table | Grain | Key Columns |\n",
"|-------|-------|-------------|\n",
"| `mart_neighbourhood_housing` | neighbourhood \u00d7 year | pct_owner_occupied, pct_renter_occupied, income_quintile |\n",
"| `mart_neighbourhood_housing` | neighbourhood × year | pct_owner_occupied, pct_renter_occupied, income_quintile |\n",
"\n",
"### SQL Query"
]
@@ -30,15 +30,16 @@
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from sqlalchemy import create_engine\n",
"from dotenv import load_dotenv\n",
"import os\n",
"\n",
"# Load .env from project root\n",
"load_dotenv('../../.env')\n",
"import pandas as pd\n",
"from dotenv import load_dotenv\n",
"from sqlalchemy import create_engine\n",
"\n",
"engine = create_engine(os.environ['DATABASE_URL'])\n",
"# Load .env from project root\n",
"load_dotenv(\"../../.env\")\n",
"\n",
"engine = create_engine(os.environ[\"DATABASE_URL\"])\n",
"\n",
"query = \"\"\"\n",
"SELECT\n",
@@ -77,18 +78,17 @@
"source": [
"# Prepare for stacked bar\n",
"df_stacked = df.melt(\n",
" id_vars=['neighbourhood_name', 'income_quintile'],\n",
" value_vars=['pct_owner_occupied', 'pct_renter_occupied'],\n",
" var_name='tenure_type',\n",
" value_name='percentage'\n",
" id_vars=[\"neighbourhood_name\", \"income_quintile\"],\n",
" value_vars=[\"pct_owner_occupied\", \"pct_renter_occupied\"],\n",
" var_name=\"tenure_type\",\n",
" value_name=\"percentage\",\n",
")\n",
"\n",
"df_stacked['tenure_type'] = df_stacked['tenure_type'].map({\n",
" 'pct_owner_occupied': 'Owner',\n",
" 'pct_renter_occupied': 'Renter'\n",
"})\n",
"df_stacked[\"tenure_type\"] = df_stacked[\"tenure_type\"].map(\n",
" {\"pct_owner_occupied\": \"Owner\", \"pct_renter_occupied\": \"Renter\"}\n",
")\n",
"\n",
"data = df_stacked.to_dict('records')"
"data = df_stacked.to_dict(\"records\")"
]
},
{
@@ -105,7 +105,14 @@
"outputs": [],
"source": [
"print(\"Highest Renter Neighbourhoods:\")\n",
"df[['neighbourhood_name', 'pct_renter_occupied', 'pct_owner_occupied', 'income_quintile']].head(10)"
"df[\n",
" [\n",
" \"neighbourhood_name\",\n",
" \"pct_renter_occupied\",\n",
" \"pct_owner_occupied\",\n",
" \"income_quintile\",\n",
" ]\n",
"].head(10)"
]
},
{
@@ -116,7 +123,7 @@
"\n",
"### Figure Factory\n",
"\n",
"Uses `create_stacked_bar` from `portfolio_app.figures.bar_charts`.\n",
"Uses `create_stacked_bar` from `portfolio_app.figures.toronto.bar_charts`.\n",
"\n",
"**Key Parameters:**\n",
"- `x_column`: 'neighbourhood_name'\n",
@@ -132,21 +139,22 @@
"outputs": [],
"source": [
"import sys\n",
"sys.path.insert(0, '../..')\n",
"\n",
"from portfolio_app.figures.bar_charts import create_stacked_bar\n",
"sys.path.insert(0, \"../..\")\n",
"\n",
"from portfolio_app.figures.toronto.bar_charts import create_stacked_bar\n",
"\n",
"# Show top 20 by renter percentage\n",
"top_20_names = df.head(20)['neighbourhood_name'].tolist()\n",
"data_filtered = [d for d in data if d['neighbourhood_name'] in top_20_names]\n",
"top_20_names = df.head(20)[\"neighbourhood_name\"].tolist()\n",
"data_filtered = [d for d in data if d[\"neighbourhood_name\"] in top_20_names]\n",
"\n",
"fig = create_stacked_bar(\n",
" data=data_filtered,\n",
" x_column='neighbourhood_name',\n",
" value_column='percentage',\n",
" category_column='tenure_type',\n",
" title='Housing Tenure Mix - Top 20 Renter Neighbourhoods',\n",
" color_map={'Owner': '#4CAF50', 'Renter': '#2196F3'},\n",
" x_column=\"neighbourhood_name\",\n",
" value_column=\"percentage\",\n",
" category_column=\"tenure_type\",\n",
" title=\"Housing Tenure Mix - Top 20 Renter Neighbourhoods\",\n",
" color_map={\"Owner\": \"#4CAF50\", \"Renter\": \"#2196F3\"},\n",
" show_percentages=True,\n",
")\n",
"\n",
@@ -172,7 +180,9 @@
"\n",
"# By income quintile\n",
"print(\"\\nTenure by Income Quintile:\")\n",
"df.groupby('income_quintile')[['pct_owner_occupied', 'pct_renter_occupied']].mean().round(1)"
"df.groupby(\"income_quintile\")[\n",
" [\"pct_owner_occupied\", \"pct_renter_occupied\"]\n",
"].mean().round(1)"
]
}
],

View File

@@ -19,7 +19,7 @@
"\n",
"| Table | Grain | Key Columns |\n",
"|-------|-------|-------------|\n",
"| `mart_neighbourhood_overview` | neighbourhood \u00d7 year | neighbourhood_name, median_household_income, safety_score, population |\n",
"| `mart_neighbourhood_overview` | neighbourhood × year | neighbourhood_name, median_household_income, safety_score, population |\n",
"\n",
"### SQL Query"
]
@@ -30,15 +30,16 @@
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from sqlalchemy import create_engine\n",
"from dotenv import load_dotenv\n",
"import os\n",
"\n",
"# Load .env from project root\n",
"load_dotenv('../../.env')\n",
"import pandas as pd\n",
"from dotenv import load_dotenv\n",
"from sqlalchemy import create_engine\n",
"\n",
"engine = create_engine(os.environ['DATABASE_URL'])\n",
"# Load .env from project root\n",
"load_dotenv(\"../../.env\")\n",
"\n",
"engine = create_engine(os.environ[\"DATABASE_URL\"])\n",
"\n",
"query = \"\"\"\n",
"SELECT\n",
@@ -77,10 +78,10 @@
"outputs": [],
"source": [
"# Scale income to thousands for better axis readability\n",
"df['income_thousands'] = df['median_household_income'] / 1000\n",
"df[\"income_thousands\"] = df[\"median_household_income\"] / 1000\n",
"\n",
"# Prepare data for figure factory\n",
"data = df.to_dict('records')"
"data = df.to_dict(\"records\")"
]
},
{
@@ -96,7 +97,14 @@
"metadata": {},
"outputs": [],
"source": [
"df[['neighbourhood_name', 'median_household_income', 'safety_score', 'crime_rate_per_100k']].head(10)"
"df[\n",
" [\n",
" \"neighbourhood_name\",\n",
" \"median_household_income\",\n",
" \"safety_score\",\n",
" \"crime_rate_per_100k\",\n",
" ]\n",
"].head(10)"
]
},
{
@@ -107,7 +115,7 @@
"\n",
"### Figure Factory\n",
"\n",
"Uses `create_scatter_figure` from `portfolio_app.figures.scatter`.\n",
"Uses `create_scatter_figure` from `portfolio_app.figures.toronto.scatter`.\n",
"\n",
"**Key Parameters:**\n",
"- `x_column`: 'income_thousands' (median household income in $K)\n",
@@ -124,19 +132,20 @@
"outputs": [],
"source": [
"import sys\n",
"sys.path.insert(0, '../..')\n",
"\n",
"from portfolio_app.figures.scatter import create_scatter_figure\n",
"sys.path.insert(0, \"../..\")\n",
"\n",
"from portfolio_app.figures.toronto.scatter import create_scatter_figure\n",
"\n",
"fig = create_scatter_figure(\n",
" data=data,\n",
" x_column='income_thousands',\n",
" y_column='safety_score',\n",
" name_column='neighbourhood_name',\n",
" size_column='population',\n",
" title='Income vs Safety by Neighbourhood',\n",
" x_title='Median Household Income ($K)',\n",
" y_title='Safety Score (0-100)',\n",
" x_column=\"income_thousands\",\n",
" y_column=\"safety_score\",\n",
" name_column=\"neighbourhood_name\",\n",
" size_column=\"population\",\n",
" title=\"Income vs Safety by Neighbourhood\",\n",
" x_title=\"Median Household Income ($K)\",\n",
" y_title=\"Safety Score (0-100)\",\n",
" trendline=True,\n",
")\n",
"\n",
@@ -166,7 +175,7 @@
"outputs": [],
"source": [
"# Calculate correlation coefficient\n",
"correlation = df['median_household_income'].corr(df['safety_score'])\n",
"correlation = df[\"median_household_income\"].corr(df[\"safety_score\"])\n",
"print(f\"Correlation coefficient (Income vs Safety): {correlation:.3f}\")"
]
}

View File

@@ -29,7 +29,38 @@
"execution_count": null,
"metadata": {},
"outputs": [],
"source": "import pandas as pd\nfrom sqlalchemy import create_engine\nfrom dotenv import load_dotenv\nimport os\n\n# Load .env from project root\nload_dotenv('../../.env')\n\nengine = create_engine(os.environ['DATABASE_URL'])\n\nquery = \"\"\"\nSELECT\n neighbourhood_id,\n neighbourhood_name,\n geometry,\n year,\n livability_score,\n safety_score,\n affordability_score,\n amenity_score,\n population,\n median_household_income\nFROM public_marts.mart_neighbourhood_overview\nWHERE year = (SELECT MAX(year) FROM public_marts.mart_neighbourhood_overview)\nORDER BY livability_score DESC\n\"\"\"\n\ndf = pd.read_sql(query, engine)\nprint(f\"Loaded {len(df)} neighbourhoods\")"
"source": [
"import os\n",
"\n",
"import pandas as pd\n",
"from dotenv import load_dotenv\n",
"from sqlalchemy import create_engine\n",
"\n",
"# Load .env from project root\n",
"load_dotenv(\"../../.env\")\n",
"\n",
"engine = create_engine(os.environ[\"DATABASE_URL\"])\n",
"\n",
"query = \"\"\"\n",
"SELECT\n",
" neighbourhood_id,\n",
" neighbourhood_name,\n",
" geometry,\n",
" year,\n",
" livability_score,\n",
" safety_score,\n",
" affordability_score,\n",
" amenity_score,\n",
" population,\n",
" median_household_income\n",
"FROM public_marts.mart_neighbourhood_overview\n",
"WHERE year = (SELECT MAX(year) FROM public_marts.mart_neighbourhood_overview)\n",
"ORDER BY livability_score DESC\n",
"\"\"\"\n",
"\n",
"df = pd.read_sql(query, engine)\n",
"print(f\"Loaded {len(df)} neighbourhoods\")"
]
},
{
"cell_type": "markdown",
@@ -49,21 +80,20 @@
"outputs": [],
"source": [
"# Transform geometry to GeoJSON\n",
"import geopandas as gpd\n",
"import json\n",
"\n",
"import geopandas as gpd\n",
"\n",
"# Convert WKB geometry to GeoDataFrame\n",
"gdf = gpd.GeoDataFrame(\n",
" df,\n",
" geometry=gpd.GeoSeries.from_wkb(df['geometry']),\n",
" crs='EPSG:4326'\n",
" df, geometry=gpd.GeoSeries.from_wkb(df[\"geometry\"]), crs=\"EPSG:4326\"\n",
")\n",
"\n",
"# Create GeoJSON FeatureCollection\n",
"geojson = json.loads(gdf.to_json())\n",
"\n",
"# Prepare data for figure factory\n",
"data = df.drop(columns=['geometry']).to_dict('records')"
"data = df.drop(columns=[\"geometry\"]).to_dict(\"records\")"
]
},
{
@@ -79,7 +109,15 @@
"metadata": {},
"outputs": [],
"source": [
"df[['neighbourhood_name', 'livability_score', 'safety_score', 'affordability_score', 'amenity_score']].head(10)"
"df[\n",
" [\n",
" \"neighbourhood_name\",\n",
" \"livability_score\",\n",
" \"safety_score\",\n",
" \"affordability_score\",\n",
" \"amenity_score\",\n",
" ]\n",
"].head(10)"
]
},
{
@@ -90,7 +128,7 @@
"\n",
"### Figure Factory\n",
"\n",
"Uses `create_choropleth_figure` from `portfolio_app.figures.choropleth`.\n",
"Uses `create_choropleth_figure` from `portfolio_app.figures.toronto.choropleth`.\n",
"\n",
"**Key Parameters:**\n",
"- `geojson`: GeoJSON FeatureCollection with neighbourhood boundaries\n",
@@ -107,18 +145,24 @@
"outputs": [],
"source": [
"import sys\n",
"sys.path.insert(0, '../..')\n",
"\n",
"from portfolio_app.figures.choropleth import create_choropleth_figure\n",
"sys.path.insert(0, \"../..\")\n",
"\n",
"from portfolio_app.figures.toronto.choropleth import create_choropleth_figure\n",
"\n",
"fig = create_choropleth_figure(\n",
" geojson=geojson,\n",
" data=data,\n",
" location_key='neighbourhood_id',\n",
" color_column='livability_score',\n",
" hover_data=['neighbourhood_name', 'safety_score', 'affordability_score', 'amenity_score'],\n",
" color_scale='RdYlGn',\n",
" title='Toronto Neighbourhood Livability Score',\n",
" location_key=\"neighbourhood_id\",\n",
" color_column=\"livability_score\",\n",
" hover_data=[\n",
" \"neighbourhood_name\",\n",
" \"safety_score\",\n",
" \"affordability_score\",\n",
" \"amenity_score\",\n",
" ],\n",
" color_scale=\"RdYlGn\",\n",
" title=\"Toronto Neighbourhood Livability Score\",\n",
" zoom=10,\n",
")\n",
"\n",

View File

@@ -19,7 +19,7 @@
"\n",
"| Table | Grain | Key Columns |\n",
"|-------|-------|-------------|\n",
"| `mart_neighbourhood_overview` | neighbourhood \u00d7 year | neighbourhood_name, livability_score |\n",
"| `mart_neighbourhood_overview` | neighbourhood × year | neighbourhood_name, livability_score |\n",
"\n",
"### SQL Query"
]
@@ -30,15 +30,16 @@
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from sqlalchemy import create_engine\n",
"from dotenv import load_dotenv\n",
"import os\n",
"\n",
"# Load .env from project root\n",
"load_dotenv('../../.env')\n",
"import pandas as pd\n",
"from dotenv import load_dotenv\n",
"from sqlalchemy import create_engine\n",
"\n",
"engine = create_engine(os.environ['DATABASE_URL'])\n",
"# Load .env from project root\n",
"load_dotenv(\"../../.env\")\n",
"\n",
"engine = create_engine(os.environ[\"DATABASE_URL\"])\n",
"\n",
"query = \"\"\"\n",
"SELECT\n",
@@ -76,7 +77,7 @@
"source": [
"# The figure factory handles top/bottom selection internally\n",
"# Just prepare as list of dicts\n",
"data = df.to_dict('records')"
"data = df.to_dict(\"records\")"
]
},
{
@@ -106,7 +107,7 @@
"\n",
"### Figure Factory\n",
"\n",
"Uses `create_ranking_bar` from `portfolio_app.figures.bar_charts`.\n",
"Uses `create_ranking_bar` from `portfolio_app.figures.toronto.bar_charts`.\n",
"\n",
"**Key Parameters:**\n",
"- `data`: List of dicts with all neighbourhoods\n",
@@ -123,20 +124,21 @@
"outputs": [],
"source": [
"import sys\n",
"sys.path.insert(0, '../..')\n",
"\n",
"from portfolio_app.figures.bar_charts import create_ranking_bar\n",
"sys.path.insert(0, \"../..\")\n",
"\n",
"from portfolio_app.figures.toronto.bar_charts import create_ranking_bar\n",
"\n",
"fig = create_ranking_bar(\n",
" data=data,\n",
" name_column='neighbourhood_name',\n",
" value_column='livability_score',\n",
" title='Top & Bottom 10 Neighbourhoods by Livability',\n",
" name_column=\"neighbourhood_name\",\n",
" value_column=\"livability_score\",\n",
" title=\"Top & Bottom 10 Neighbourhoods by Livability\",\n",
" top_n=10,\n",
" bottom_n=10,\n",
" color_top='#4CAF50', # Green for top performers\n",
" color_bottom='#F44336', # Red for bottom performers\n",
" value_format='.1f',\n",
" color_top=\"#4CAF50\", # Green for top performers\n",
" color_bottom=\"#F44336\", # Red for bottom performers\n",
" value_format=\".1f\",\n",
")\n",
"\n",
"fig.show()"

View File

@@ -19,7 +19,7 @@
"\n",
"| Table | Grain | Key Columns |\n",
"|-------|-------|-------------|\n",
"| `mart_neighbourhood_safety` | neighbourhood \u00d7 year | assault_count, auto_theft_count, break_enter_count, robbery_count, etc. |\n",
"| `mart_neighbourhood_safety` | neighbourhood × year | assault_count, auto_theft_count, break_enter_count, robbery_count, etc. |\n",
"\n",
"### SQL Query"
]
@@ -30,15 +30,16 @@
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from sqlalchemy import create_engine\n",
"from dotenv import load_dotenv\n",
"import os\n",
"\n",
"# Load .env from project root\n",
"load_dotenv('../../.env')\n",
"import pandas as pd\n",
"from dotenv import load_dotenv\n",
"from sqlalchemy import create_engine\n",
"\n",
"engine = create_engine(os.environ['DATABASE_URL'])\n",
"# Load .env from project root\n",
"load_dotenv(\"../../.env\")\n",
"\n",
"engine = create_engine(os.environ[\"DATABASE_URL\"])\n",
"\n",
"query = \"\"\"\n",
"SELECT\n",
@@ -79,17 +80,25 @@
"outputs": [],
"source": [
"df_melted = df.melt(\n",
" id_vars=['neighbourhood_name', 'total_incidents'],\n",
" value_vars=['assault_count', 'auto_theft_count', 'break_enter_count', \n",
" 'robbery_count', 'theft_over_count', 'homicide_count'],\n",
" var_name='crime_type',\n",
" value_name='count'\n",
" id_vars=[\"neighbourhood_name\", \"total_incidents\"],\n",
" value_vars=[\n",
" \"assault_count\",\n",
" \"auto_theft_count\",\n",
" \"break_enter_count\",\n",
" \"robbery_count\",\n",
" \"theft_over_count\",\n",
" \"homicide_count\",\n",
" ],\n",
" var_name=\"crime_type\",\n",
" value_name=\"count\",\n",
")\n",
"\n",
"# Clean labels\n",
"df_melted['crime_type'] = df_melted['crime_type'].str.replace('_count', '').str.replace('_', ' ').str.title()\n",
"df_melted[\"crime_type\"] = (\n",
" df_melted[\"crime_type\"].str.replace(\"_count\", \"\").str.replace(\"_\", \" \").str.title()\n",
")\n",
"\n",
"data = df_melted.to_dict('records')"
"data = df_melted.to_dict(\"records\")"
]
},
{
@@ -105,7 +114,15 @@
"metadata": {},
"outputs": [],
"source": [
"df[['neighbourhood_name', 'assault_count', 'auto_theft_count', 'break_enter_count', 'total_incidents']].head(10)"
"df[\n",
" [\n",
" \"neighbourhood_name\",\n",
" \"assault_count\",\n",
" \"auto_theft_count\",\n",
" \"break_enter_count\",\n",
" \"total_incidents\",\n",
" ]\n",
"].head(10)"
]
},
{
@@ -116,7 +133,7 @@
"\n",
"### Figure Factory\n",
"\n",
"Uses `create_stacked_bar` from `portfolio_app.figures.bar_charts`."
"Uses `create_stacked_bar` from `portfolio_app.figures.toronto.bar_charts`."
]
},
{
@@ -126,23 +143,24 @@
"outputs": [],
"source": [
"import sys\n",
"sys.path.insert(0, '../..')\n",
"\n",
"from portfolio_app.figures.bar_charts import create_stacked_bar\n",
"sys.path.insert(0, \"../..\")\n",
"\n",
"from portfolio_app.figures.toronto.bar_charts import create_stacked_bar\n",
"\n",
"fig = create_stacked_bar(\n",
" data=data,\n",
" x_column='neighbourhood_name',\n",
" value_column='count',\n",
" category_column='crime_type',\n",
" title='Crime Type Breakdown - Top 15 Neighbourhoods',\n",
" x_column=\"neighbourhood_name\",\n",
" value_column=\"count\",\n",
" category_column=\"crime_type\",\n",
" title=\"Crime Type Breakdown - Top 15 Neighbourhoods\",\n",
" color_map={\n",
" 'Assault': '#d62728',\n",
" 'Auto Theft': '#ff7f0e',\n",
" 'Break Enter': '#9467bd',\n",
" 'Robbery': '#8c564b',\n",
" 'Theft Over': '#e377c2',\n",
" 'Homicide': '#1f77b4'\n",
" \"Assault\": \"#d62728\",\n",
" \"Auto Theft\": \"#ff7f0e\",\n",
" \"Break Enter\": \"#9467bd\",\n",
" \"Robbery\": \"#8c564b\",\n",
" \"Theft Over\": \"#e377c2\",\n",
" \"Homicide\": \"#1f77b4\",\n",
" },\n",
")\n",
"\n",

View File

@@ -19,7 +19,7 @@
"\n",
"| Table | Grain | Key Columns |\n",
"|-------|-------|-------------|\n",
"| `mart_neighbourhood_safety` | neighbourhood \u00d7 year | crime_rate_per_100k, crime_index, safety_tier, geometry |\n",
"| `mart_neighbourhood_safety` | neighbourhood × year | crime_rate_per_100k, crime_index, safety_tier, geometry |\n",
"\n",
"### SQL Query"
]
@@ -30,15 +30,16 @@
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from sqlalchemy import create_engine\n",
"from dotenv import load_dotenv\n",
"import os\n",
"\n",
"# Load .env from project root\n",
"load_dotenv('../../.env')\n",
"import pandas as pd\n",
"from dotenv import load_dotenv\n",
"from sqlalchemy import create_engine\n",
"\n",
"engine = create_engine(os.environ['DATABASE_URL'])\n",
"# Load .env from project root\n",
"load_dotenv(\"../../.env\")\n",
"\n",
"engine = create_engine(os.environ[\"DATABASE_URL\"])\n",
"\n",
"query = \"\"\"\n",
"SELECT\n",
@@ -77,17 +78,16 @@
"metadata": {},
"outputs": [],
"source": [
"import geopandas as gpd\n",
"import json\n",
"\n",
"import geopandas as gpd\n",
"\n",
"gdf = gpd.GeoDataFrame(\n",
" df,\n",
" geometry=gpd.GeoSeries.from_wkb(df['geometry']),\n",
" crs='EPSG:4326'\n",
" df, geometry=gpd.GeoSeries.from_wkb(df[\"geometry\"]), crs=\"EPSG:4326\"\n",
")\n",
"\n",
"geojson = json.loads(gdf.to_json())\n",
"data = df.drop(columns=['geometry']).to_dict('records')"
"data = df.drop(columns=[\"geometry\"]).to_dict(\"records\")"
]
},
{
@@ -103,7 +103,15 @@
"metadata": {},
"outputs": [],
"source": [
"df[['neighbourhood_name', 'crime_rate_per_100k', 'crime_index', 'safety_tier', 'total_incidents']].head(10)"
"df[\n",
" [\n",
" \"neighbourhood_name\",\n",
" \"crime_rate_per_100k\",\n",
" \"crime_index\",\n",
" \"safety_tier\",\n",
" \"total_incidents\",\n",
" ]\n",
"].head(10)"
]
},
{
@@ -114,7 +122,7 @@
"\n",
"### Figure Factory\n",
"\n",
"Uses `create_choropleth_figure` from `portfolio_app.figures.choropleth`.\n",
"Uses `create_choropleth_figure` from `portfolio_app.figures.toronto.choropleth`.\n",
"\n",
"**Key Parameters:**\n",
"- `color_column`: 'crime_rate_per_100k'\n",
@@ -128,18 +136,19 @@
"outputs": [],
"source": [
"import sys\n",
"sys.path.insert(0, '../..')\n",
"\n",
"from portfolio_app.figures.choropleth import create_choropleth_figure\n",
"sys.path.insert(0, \"../..\")\n",
"\n",
"from portfolio_app.figures.toronto.choropleth import create_choropleth_figure\n",
"\n",
"fig = create_choropleth_figure(\n",
" geojson=geojson,\n",
" data=data,\n",
" location_key='neighbourhood_id',\n",
" color_column='crime_rate_per_100k',\n",
" hover_data=['neighbourhood_name', 'crime_index', 'total_incidents'],\n",
" color_scale='RdYlGn_r',\n",
" title='Toronto Crime Rate per 100,000 Population',\n",
" location_key=\"neighbourhood_id\",\n",
" color_column=\"crime_rate_per_100k\",\n",
" hover_data=[\"neighbourhood_name\", \"crime_index\", \"total_incidents\"],\n",
" color_scale=\"RdYlGn_r\",\n",
" title=\"Toronto Crime Rate per 100,000 Population\",\n",
" zoom=10,\n",
")\n",
"\n",

View File

@@ -19,7 +19,7 @@
"\n",
"| Table | Grain | Key Columns |\n",
"|-------|-------|-------------|\n",
"| `mart_neighbourhood_safety` | neighbourhood \u00d7 year | year, crime_rate_per_100k, crime_yoy_change_pct |\n",
"| `mart_neighbourhood_safety` | neighbourhood × year | year, crime_rate_per_100k, crime_yoy_change_pct |\n",
"\n",
"### SQL Query"
]
@@ -30,15 +30,16 @@
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from sqlalchemy import create_engine\n",
"from dotenv import load_dotenv\n",
"import os\n",
"\n",
"# Load .env from project root\n",
"load_dotenv('../../.env')\n",
"import pandas as pd\n",
"from dotenv import load_dotenv\n",
"from sqlalchemy import create_engine\n",
"\n",
"engine = create_engine(os.environ['DATABASE_URL'])\n",
"# Load .env from project root\n",
"load_dotenv(\"../../.env\")\n",
"\n",
"engine = create_engine(os.environ[\"DATABASE_URL\"])\n",
"\n",
"query = \"\"\"\n",
"SELECT\n",
@@ -76,21 +77,23 @@
"metadata": {},
"outputs": [],
"source": [
"df['date'] = pd.to_datetime(df['year'].astype(str) + '-01-01')\n",
"df[\"date\"] = pd.to_datetime(df[\"year\"].astype(str) + \"-01-01\")\n",
"\n",
"# Melt for multi-line\n",
"df_melted = df.melt(\n",
" id_vars=['year', 'date'],\n",
" value_vars=['avg_assault_rate', 'avg_auto_theft_rate', 'avg_break_enter_rate'],\n",
" var_name='crime_type',\n",
" value_name='rate_per_100k'\n",
" id_vars=[\"year\", \"date\"],\n",
" value_vars=[\"avg_assault_rate\", \"avg_auto_theft_rate\", \"avg_break_enter_rate\"],\n",
" var_name=\"crime_type\",\n",
" value_name=\"rate_per_100k\",\n",
")\n",
"\n",
"df_melted['crime_type'] = df_melted['crime_type'].map({\n",
" 'avg_assault_rate': 'Assault',\n",
" 'avg_auto_theft_rate': 'Auto Theft',\n",
" 'avg_break_enter_rate': 'Break & Enter'\n",
"})"
"df_melted[\"crime_type\"] = df_melted[\"crime_type\"].map(\n",
" {\n",
" \"avg_assault_rate\": \"Assault\",\n",
" \"avg_auto_theft_rate\": \"Auto Theft\",\n",
" \"avg_break_enter_rate\": \"Break & Enter\",\n",
" }\n",
")"
]
},
{
@@ -106,7 +109,7 @@
"metadata": {},
"outputs": [],
"source": [
"df[['year', 'avg_crime_rate', 'total_city_incidents', 'avg_yoy_change']]"
"df[[\"year\", \"avg_crime_rate\", \"total_city_incidents\", \"avg_yoy_change\"]]"
]
},
{
@@ -127,22 +130,23 @@
"outputs": [],
"source": [
"import sys\n",
"sys.path.insert(0, '../..')\n",
"\n",
"from portfolio_app.figures.time_series import create_price_time_series\n",
"sys.path.insert(0, \"../..\")\n",
"\n",
"data = df_melted.to_dict('records')\n",
"from portfolio_app.figures.toronto.time_series import create_price_time_series\n",
"\n",
"data = df_melted.to_dict(\"records\")\n",
"\n",
"fig = create_price_time_series(\n",
" data=data,\n",
" date_column='date',\n",
" price_column='rate_per_100k',\n",
" group_column='crime_type',\n",
" title='Toronto Crime Trends by Type (5 Years)',\n",
" date_column=\"date\",\n",
" price_column=\"rate_per_100k\",\n",
" group_column=\"crime_type\",\n",
" title=\"Toronto Crime Trends by Type (5 Years)\",\n",
")\n",
"\n",
"# Remove dollar sign formatting since this is rate data\n",
"fig.update_layout(yaxis_tickprefix='', yaxis_title='Rate per 100K')\n",
"fig.update_layout(yaxis_tickprefix=\"\", yaxis_title=\"Rate per 100K\")\n",
"\n",
"fig.show()"
]
@@ -161,15 +165,19 @@
"outputs": [],
"source": [
"# Total crime rate trend\n",
"total_data = df[['date', 'avg_crime_rate']].rename(columns={'avg_crime_rate': 'total_rate'}).to_dict('records')\n",
"total_data = (\n",
" df[[\"date\", \"avg_crime_rate\"]]\n",
" .rename(columns={\"avg_crime_rate\": \"total_rate\"})\n",
" .to_dict(\"records\")\n",
")\n",
"\n",
"fig2 = create_price_time_series(\n",
" data=total_data,\n",
" date_column='date',\n",
" price_column='total_rate',\n",
" title='Toronto Overall Crime Rate Trend',\n",
" date_column=\"date\",\n",
" price_column=\"total_rate\",\n",
" title=\"Toronto Overall Crime Rate Trend\",\n",
")\n",
"fig2.update_layout(yaxis_tickprefix='', yaxis_title='Rate per 100K')\n",
"fig2.update_layout(yaxis_tickprefix=\"\", yaxis_title=\"Rate per 100K\")\n",
"fig2.show()"
]
}