4 Commits

Author SHA1 Message Date
138e6fe497 feat: Implement Sprint 8 - Portfolio website expansion (MVP)
New pages:
- Home: Redesigned with hero, impact stats, featured project
- About: 6-section professional narrative
- Projects: Hub with 4 project cards and status badges
- Resume: Inline display with download placeholders
- Contact: Form UI (disabled) with contact info
- Blog: Markdown-based system with frontmatter support

Infrastructure:
- Blog system with markdown loader (python-frontmatter, markdown, pygments)
- Sidebar callback for active state highlighting on navigation
- Separated navigation into main pages and projects/dashboards groups

Closes #36, #37, #38, #39, #40, #41, #42, #43

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 15:40:01 -05:00
cd7b5ce154 Merge branch 'development' of ssh://hotserv.tailc9b278.ts.net:2222/lmiranda/personal-portfolio into development 2026-01-15 14:22:53 -05:00
e1135a77a8 Merge pull request 'Added Change Proposal' (#35) from lmiranda-change-proposal into development
Reviewed-on: lmiranda/personal-portfolio#35
2026-01-15 19:19:47 +00:00
39656ca836 Added Change Proposal
Changing the entire page layout and arrangement. It should be transformed into a proper project document.
2026-01-15 19:19:19 +00:00
16 changed files with 2457 additions and 134 deletions

View File

@@ -0,0 +1,520 @@
# Leo Miranda — Portfolio Website Blueprint
Structure, navigation, and complete page content
---
## Site Architecture
```
leodata.science
├── Home (Landing)
├── About
├── Projects (Overview + Status)
│ └── [Side Navbar]
│ ├── → Toronto Housing Market Dashboard (live)
│ ├── → US Retail Energy Price Predictor (coming soon)
│ └── → DataFlow Platform (Phase 3)
├── Lab (Bandit Labs / Experiments)
├── Blog
│ └── [Articles]
├── Resume (downloadable + inline)
└── Contact
```
---
## Navigation Structure
Primary Nav: Home | Projects | Lab | Blog | About | Resume
Footer: LinkedIn | GitHub | Email | “Built with Dash & too much coffee”
---
# PAGE CONTENT
---
## 1. HOME (Landing Page)
### Hero Section
Headline:
> I turn messy data into systems that actually work.
Subhead:
> Data Engineer & Analytics Specialist. 8 years building pipelines, dashboards, and the infrastructure nobody sees but everyone depends on. Based in Toronto.
CTA Buttons:
- View Projects → /projects
- Get In Touch → /contact
---
### Quick Impact Strip (Optional — 3-4 stats)
| 1B+ | 40% | 5 Years |
|-------------------------------------------------|------------------------------------|-----------------------------|
| Rows processed daily across enterprise platform | Efficiency gain through automation | Building DataFlow from zero |
---
### Featured Project Card
Toronto Housing Market Dashboard
> Real-time analytics on Toronto's housing trends. dbt-powered ETL, Python scraping, Plotly visualization.
> \[View Dashboard\] \[View Repository\]
---
### Brief Intro (2-3 sentences)
I'm a data engineer who's spent the last 8 years in the trenches—building the infrastructure that feeds dashboards, automates the boring stuff, and makes data actually usable. Most of my work has been in contact center operations and energy, where I've had to be scrappy: one-person data teams, legacy systems, stakeholders who need answers yesterday.
I like solving real problems, not theoretical ones.
---
## 2. ABOUT PAGE
### Opening
I didn't start in data. I started in project management—CAPM certified, ITIL trained, the whole corporate playbook. Then I realized I liked building systems more than managing timelines, and I was better at automating reports than attending meetings about them.
That pivot led me to where I am now: 8 years deep in data engineering, analytics, and the messy reality of turning raw information into something people can actually use.
---
### What I Actually Do
The short version: I build data infrastructure. Pipelines, warehouses, dashboards, automation—the invisible machinery that makes businesses run on data instead of gut feelings.
The longer version: At Summitt Energy, I've been the sole data professional supporting 150+ employees across 9 markets (Canada and US). I inherited nothing—no data warehouse, no reporting infrastructure, no documentation. Over 5 years, I built DataFlow: an enterprise platform processing 1B+ rows, integrating contact center data, CRM systems, and legacy tools that definitely weren't designed to talk to each other.
That meant learning to be a generalist. I've done ETL pipeline development (Python, SQLAlchemy), dimensional modeling, dashboard design (Power BI, Plotly-Dash), API integration, and more stakeholder management than I'd like to admit. When you're the only data person, you learn to wear every hat.
---
### How I Think About Data
I'm not interested in data for data's sake. The question I always start with: What decision does this help someone make?
Most of my work has been in operations-heavy environments—contact centers, energy retail, logistics. These aren't glamorous domains, but they're where data can have massive impact. A 30% improvement in abandon rate isn't just a metric; it's thousands of customers who didn't hang up frustrated. A 40% reduction in reporting time means managers can actually manage instead of wrestling with spreadsheets.
I care about outcomes, not technology stacks.
---
### The Technical Stuff (For Those Who Want It)
Languages: Python (Pandas, SQLAlchemy, FastAPI), SQL (MSSQL, PostgreSQL), R, VBA
Data Engineering: ETL/ELT pipelines, dimensional modeling (star schema), dbt patterns, batch processing, API integration, web scraping (Selenium)
Visualization: Plotly/Dash, Power BI, Tableau
Platforms: Genesys Cloud, Five9, Zoho, Azure DevOps
Currently Learning: Cloud certification (Azure DP-203), Airflow, Snowflake
---
### Outside Work
I'm a Brazilian-Canadian based in Toronto. I speak Portuguese (native), English (fluent), and enough Spanish to survive.
When I'm not staring at SQL, I'm usually:
- Building automation tools for small businesses through Bandit Labs (my side project)
- Contributing to open source (MCP servers, Claude Code plugins)
- Trying to explain to my kid why Daddy's job involves “making computers talk to each other”
---
### What I'm Looking For
I'm currently exploring Senior Data Analyst and Data Engineer roles in the Toronto area (or remote). I'm most interested in:
- Companies that treat data as infrastructure, not an afterthought
- Teams where I can contribute to architecture decisions, not just execute tickets
- Operations-focused industries (energy, logistics, financial services, contact center tech)
If that sounds like your team, let's talk.
\[Download Resume\] \[Contact Me\]
---
## 3. PROJECTS PAGE
### Navigation Note
The Projects page serves as an overview and status hub for all projects. A side navbar provides direct links to live dashboards and repositories. Users land on the overview first, then navigate to specific projects via the sidebar.
### Intro Text
These are projects I've built—some professional (anonymized where needed), some personal. Each one taught me something. Use the sidebar to jump directly to live dashboards or explore the overviews below.
---
### Project Card: Toronto Housing Market Dashboard
Type: Personal Project | Status: Live
The Problem:
Toronto's housing market moves fast, and most publicly available data is either outdated, behind paywalls, or scattered across dozens of sources. I wanted a single dashboard that tracked trends in real-time.
What I Built:
- Data Pipeline: Python scraper pulling listings data, automated on schedule
- Transformation Layer: dbt-based SQL architecture (staging → intermediate → marts)
- Visualization: Interactive Plotly-Dash dashboard with filters by neighborhood, price range, property type
- Infrastructure: PostgreSQL backend, version-controlled in Git
Tech Stack: Python, dbt, PostgreSQL, Plotly-Dash, GitHub Actions
What I Learned:
Real estate data is messy as hell. Listings get pulled, prices change, duplicates are everywhere. Building a reliable pipeline meant implementing serious data quality checks and learning to embrace “good enough” over “perfect.”
\[View Live Dashboard\] \[View Repository (ETL + dbt)\]
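The "serious data quality checks" above could be sketched like this — a minimal, hedged example (field names such as `listing_id`, `price`, and `scraped_at` are illustrative, not the pipeline's actual schema):

```python
def clean_listings(rows: list[dict]) -> list[dict]:
    """Deduplicate and sanity-check scraped listing records.

    Keeps the most recent scrape per listing_id and drops rows
    with implausible prices. Field names are illustrative.
    """
    seen: dict[str, dict] = {}
    for row in rows:
        # Reject rows that fail basic plausibility checks.
        price = row.get("price")
        if price is None or not (10_000 <= price <= 50_000_000):
            continue
        # A later scrape of the same listing wins (prices change).
        key = row["listing_id"]
        if key not in seen or row["scraped_at"] > seen[key]["scraped_at"]:
            seen[key] = row
    return list(seen.values())
```

In practice checks like these run before the dbt staging layer, so downstream models only ever see deduplicated, plausible rows.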
---
### Project Card: US Retail Energy Price Predictor
Type: Personal Project | Status: Coming Soon (Phase 2)
The Problem:
Retail energy pricing in deregulated US markets is volatile and opaque. Consumers and analysts lack accessible tools to understand pricing trends and forecast where rates are headed.
What I'm Building:
- Data Pipeline: Automated ingestion of public pricing data across multiple US markets
- ML Model: Price prediction using time series forecasting (ARIMA, Prophet, or similar)
- Transformation Layer: dbt-based SQL architecture for feature engineering
- Visualization: Interactive dashboard showing historical trends + predictions by state/market
Tech Stack: Python, Scikit-learn, dbt, PostgreSQL, Plotly-Dash
Why This Project:
This showcases the ML side of my skillset—something the Toronto Housing dashboard doesn't cover. It also leverages my domain expertise from 5+ years in retail energy operations.
\[Coming Soon\]
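Until the real model lands, a seasonal-naive baseline is the usual yardstick any ARIMA/Prophet model has to beat. A hedged, pure-Python sketch (not the project's actual forecaster):

```python
def seasonal_naive_forecast(history: list[float], season: int, steps: int) -> list[float]:
    """Forecast by repeating the last fully observed season.

    A deliberately dumb baseline: tomorrow looks like the same
    point in the previous cycle. Real models must outperform this.
    """
    if len(history) < season:
        raise ValueError("need at least one full season of history")
    last_season = history[-season:]
    # Wrap around the last season for as many steps as requested.
    return [last_season[i % season] for i in range(steps)]
```

Evaluation would then compare ARIMA/Prophet error against this baseline on a held-out window.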
---
### Project Card: DataFlow Platform (Enterprise Case Study)
Type: Professional | Status: Deferred (Phase 3 — requires sanitized codebase)
The Context:
When I joined Summitt Energy, there was no data infrastructure. Reports were manual. Insights were guesswork. I was hired to fix that.
What I Built (Over 5 Years):
- v1 (2020): Basic ETL scripts pulling Genesys Cloud data into MSSQL
- v2 (2021): Dimensional model (star schema) with fact/dimension tables
- v3 (2022): Python refactor with SQLAlchemy ORM, batch processing, error handling
- v4 (2023-24): dbt-pattern SQL views (staging → intermediate → marts), FastAPI layer, CLI tools
Current State:
- 21 tables, 1B+ rows
- 5,000+ daily transactions processed
- Integrates Genesys Cloud, Zoho CRM, legacy systems
- Feeds Power BI prototypes and production Dash dashboards
- Near-zero reporting errors
Impact:
- 40% improvement in reporting efficiency
- 30% reduction in call abandon rate (via KPI framework)
- 50% faster Average Speed to Answer
- 100% callback completion rate
What I Learned:
Building data infrastructure as a team of one forces brutal prioritization. I learned to ship imperfect solutions fast, iterate based on feedback, and never underestimate how long stakeholder buy-in takes.
Note: This is proprietary work. A sanitized case study with architecture patterns (no proprietary data) will be published in Phase 3.
---
### Project Card: AI-Assisted Automation (Bandit Labs)
Type: Consulting/Side Business | Status: Active
What It Is:
Bandit Labs is my consulting practice focused on automation for small businesses. Most clients don't need enterprise data platforms—they need someone to eliminate the 4 hours/week they spend manually entering receipts.
Sample Work:
- Receipt Processing Automation: OCR pipeline (Tesseract, Google Vision) extracting purchase data from photos, pushing directly to QuickBooks. Eliminated 3-4 hours/week of manual entry for a restaurant client.
- Product Margin Tracker: Plotly-Dash dashboard with real-time profitability insights
- Claude Code Plugins: MCP servers for Gitea, Wiki.js, NetBox integration
Why I Do This:
Small businesses are underserved by the data/automation industry. Everyone wants to sell them enterprise software they don't need. I like solving problems at a scale where the impact is immediately visible.
\[Learn More About Bandit Labs\]
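The post-OCR half of a receipt pipeline like the one above might look like this sketch. The OCR step itself (Tesseract or Google Vision) is assumed to have already produced plain text; the patterns and field names here are illustrative, and real receipts need far more defensive matching:

```python
import re
from datetime import date

def parse_receipt_text(text: str) -> dict:
    """Pull a total and a date out of OCR'd receipt text.

    Returns {"total": float | None, "date": date | None}.
    """
    total = None
    m = re.search(r"(?:TOTAL|Total)\s*[:$]?\s*\$?(\d+\.\d{2})", text)
    if m:
        total = float(m.group(1))
    found_date = None
    m = re.search(r"(\d{4})-(\d{2})-(\d{2})", text)
    if m:
        found_date = date(int(m.group(1)), int(m.group(2)), int(m.group(3)))
    return {"total": total, "date": found_date}
```

The structured output then feeds whatever accounting integration the client uses (QuickBooks, in the case above).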
---
## 4. LAB PAGE (Bandit Labs / Experiments)
### Intro
This is where I experiment. Some of this becomes client work. Some of it teaches me something and gets abandoned. All of it is real code solving real (or at least real-adjacent) problems.
---
### Bandit Labs — Automation for Small Business
I started Bandit Labs because I kept meeting small business owners drowning in manual work that should have been automated years ago. Enterprise tools are overkill. Custom development is expensive. There's a gap in the middle.
What I Offer:
- Receipt/invoice processing automation
- Dashboard development (Plotly-Dash)
- Data pipeline setup for non-technical teams
- AI integration for repetitive tasks
Recent Client Work:
- Rio Açaí (Restaurant, Gatineau): Receipt OCR → QuickBooks integration. Saved 3-4 hours/week.
\[Contact for Consulting\]
---
### Open Source / Experiments
MCP Servers (Model Context Protocol)
I've built production-ready MCP servers for:
- Gitea: Issue management, label operations
- Wiki.js: Documentation access via GraphQL
- NetBox: CMDB integration (DCIM, IPAM, Virtualization)
These let AI assistants (like Claude) interact with infrastructure tools through natural language. Still experimental, but surprisingly useful for my own workflows.
Claude Code Plugins
- projman: AI-guided sprint planning with Gitea/Wiki.js integration
- cmdb-assistant: Conversational infrastructure queries against NetBox
- project-hygiene: Post-task cleanup automation
\[View on GitHub\]
---
## 5. BLOG PAGE
### Intro
I write occasionally about data engineering, automation, and the reality of being a one-person data team. No hot takes, no growth hacking—just things I've learned the hard way.
---
### Suggested Initial Articles
Article 1: “Building a Data Platform as a Team of One”
What I learned from 5 years as the sole data professional at a mid-size company
Outline:
- The reality of “full stack data” when there's no one else
- Prioritization frameworks (what to build first when everything is urgent)
- Technical debt vs. shipping something
- Building stakeholder trust without a team to back you up
- What I'd do differently
---
Article 2: “dbt Patterns Without dbt (And Why I Eventually Adopted Them)”
How I accidentally implemented analytics engineering best practices before knowing the terminology
Outline:
- The problem: SQL spaghetti in production dashboards
- My solution: staging → intermediate → marts view architecture
- Why separation of concerns matters for maintainability
- The day I discovered dbt and realized I'd been doing this manually
- Migration path for legacy SQL codebases
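The staging → intermediate → marts layering the outline describes can be demonstrated with plain database views. A minimal sketch using SQLite (table and column names are illustrative, not the actual DataFlow schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE raw_calls (call_id TEXT, queue TEXT, abandoned INT, handle_secs INT);
    INSERT INTO raw_calls VALUES
        ('c1', 'billing', 0, 300),
        ('c2', 'billing', 1, 0),
        ('c3', 'sales',   0, 180);

    -- staging: one view per source table; rename/typecast only
    CREATE VIEW stg_calls AS
        SELECT call_id, queue, abandoned, handle_secs FROM raw_calls;

    -- intermediate: business logic, still not user-facing
    CREATE VIEW int_calls_answered AS
        SELECT * FROM stg_calls WHERE abandoned = 0;

    -- marts: the only layer dashboards are allowed to query
    CREATE VIEW mart_queue_stats AS
        SELECT queue,
               COUNT(*) AS answered_calls,
               AVG(handle_secs) AS avg_handle_secs
        FROM int_calls_answered
        GROUP BY queue;
""")
print(con.execute("SELECT * FROM mart_queue_stats ORDER BY queue").fetchall())
# → [('billing', 1, 300.0), ('sales', 1, 180.0)]
```

The payoff is separation of concerns: when a source column changes, only the staging view moves, and every mart downstream keeps working.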
---
Article 3: “The Toronto Housing Market Dashboard: A Data Engineering Postmortem”
Building a real-time analytics pipeline for messy, uncooperative data
Outline:
- Why I built this (and why public housing data sucks)
- Data sourcing challenges and ethical scraping
- Pipeline architecture decisions
- dbt transformation layer design
- What broke and how I fixed it
- Dashboard design for non-technical users
---
Article 4: “Automating Small Business Operations with OCR and AI”
A case study in practical automation for non-enterprise clients
Outline:
- The client problem: 4 hours/week on receipt entry
- Why “just use \[enterprise tool\]” doesn't work for small business
- Building an OCR pipeline with Tesseract and Google Vision
- QuickBooks integration gotchas
- ROI calculation for automation projects
---
Article 5: “What I Wish I Knew Before Building My First ETL Pipeline”
Hard-won lessons for junior data engineers
Outline:
- Error handling isn't optional (it's the whole job)
- Logging is your best friend at 2am
- Why idempotency matters
- The staging table pattern
- Testing data pipelines
- Documentation nobody will read (write it anyway)
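The staging table pattern and idempotency from the outline above fit in one sketch: stage the batch, then upsert into the target so reruns are harmless. A hedged SQLite example (schema is illustrative; requires SQLite ≥ 3.24 for `ON CONFLICT`):

```python
import sqlite3

def load_batch(con: sqlite3.Connection, rows: list[tuple]) -> None:
    """Idempotent load: stage the batch, then upsert into the target.

    Running the same batch twice leaves the target unchanged,
    which is what makes 2am reruns safe.
    """
    con.execute("DELETE FROM stg_events")  # staging is scratch space
    con.executemany("INSERT INTO stg_events VALUES (?, ?)", rows)
    # WHERE true disambiguates the SELECT from the ON CONFLICT clause.
    con.execute("""
        INSERT INTO events (event_id, payload)
        SELECT event_id, payload FROM stg_events WHERE true
        ON CONFLICT(event_id) DO UPDATE SET payload = excluded.payload
    """)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stg_events (event_id TEXT, payload TEXT)")
con.execute("CREATE TABLE events (event_id TEXT PRIMARY KEY, payload TEXT)")
load_batch(con, [("e1", "a"), ("e2", "b")])
load_batch(con, [("e1", "a"), ("e2", "b")])  # rerun: no duplicates
```

The primary key on the target is doing the real work here: it turns "insert" into "insert or correct," so a failed run can simply be restarted.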
---
Article 6: “Predicting US Retail Energy Prices: An ML Project Walkthrough”
Building a forecasting model with domain knowledge from 5 years in energy retail
Outline:
- Why retail energy pricing is hard to predict (deregulation, seasonality, policy)
- Data sourcing and pipeline architecture
- Feature engineering with dbt
- Model selection (ARIMA vs Prophet vs ensemble)
- Evaluation metrics that matter for price forecasting
- Lessons from applying domain expertise to ML
---
## 6. RESUME PAGE
### Inline Display
Show a clean, readable version of the resume directly on the page. Use your tailored Senior Data Analyst version as the base.
### Download Options
- \[Download PDF\]
- \[Download DOCX\]
- \[View on LinkedIn\]
### Optional: Interactive Timeline
Visual timeline of career progression with expandable sections for each role. More engaging than a wall of text, but only if you have time to build it.
---
## 7. CONTACT PAGE
### Intro
I'm currently open to Senior Data Analyst and Data Engineer roles in Toronto (or remote). If you're working on something interesting and need someone who can build data infrastructure from scratch, I'd like to hear about it.
For consulting inquiries (automation, dashboards, small business data work), reach out about Bandit Labs.
---
### Contact Form Fields
- Name
- Email
- Subject (dropdown: Job Opportunity / Consulting Inquiry / Other)
- Message
---
### Direct Contact
- Email: leobrmi@hotmail.com
- Phone: (416) 859-7936
- LinkedIn: \[link\]
- GitHub: \[link\]
---
### Location
Toronto, ON, Canada
Canadian Citizen | Eligible to work in Canada and US
---
## TONE GUIDELINES
### Do:
- Be direct and specific
- Use first person naturally
- Include concrete metrics
- Acknowledge constraints and tradeoffs
- Show personality without being performative
- Write like you talk (minus the profanity)
### Dont:
- Use buzzwords without substance (“leveraging synergies”)
- Oversell or inflate
- Write in third person
- Use passive voice excessively
- Sound like a LinkedIn influencer
- Pretend you're a full team when you're one person
---
## SEO / DISCOVERABILITY
### Target Keywords (Organic)
- Toronto data analyst
- Data engineer portfolio
- Python ETL developer
- dbt analytics engineer
- Contact center analytics
### Blog Strategy
Aim for 1-2 posts per month initially. Focus on:
- Technical tutorials (how I built X)
- Lessons learned (what went wrong and how I fixed it)
- Industry observations (data work in operations-heavy companies)
---
## IMPLEMENTATION PRIORITY
### Phase 1 (MVP — Get it live)
1. Home page (hero + brief intro + featured project)
2. About page (full content)
3. Projects page (overview + status cards with navbar links to dashboards)
4. Resume page (inline + download)
5. Contact page (form + direct info)
6. Blog (start with 2-3 articles)
### Phase 2 (Expand)
1. Lab page (Bandit Labs + experiments)
2. US Retail Energy Price Predictor (ML project — coming soon)
3. Add more projects as completed
### Phase 3 (Polish)
1. DataFlow Platform case study (requires sanitized fork of proprietary codebase)
2. Testimonials (if available from Summitt stakeholders)
3. Interactive elements (timeline, project filters)
---
Last updated: January 2025

View File

@@ -1,5 +1,5 @@
"""Application-level callbacks for the portfolio app."""
from . import sidebar, theme
__all__ = ["sidebar", "theme"]

View File

@@ -0,0 +1,25 @@
"""Sidebar navigation callbacks for active state updates."""
from typing import Any
from dash import Input, Output, callback
from portfolio_app.components.sidebar import create_sidebar_content
@callback( # type: ignore[misc]
Output("floating-sidebar", "children"),
Input("url", "pathname"),
prevent_initial_call=False,
)
def update_sidebar_active_state(pathname: str) -> list[Any]:
"""Update sidebar to highlight the current page.
Args:
pathname: Current URL pathname from dcc.Location.
Returns:
Updated sidebar content with correct active state.
"""
current_path = pathname or "/"
return create_sidebar_content(current_path=current_path)

View File

@@ -4,9 +4,18 @@ import dash_mantine_components as dmc
from dash import dcc, html
from dash_iconify import DashIconify
# Navigation items configuration - main pages
NAV_ITEMS_MAIN = [
{"path": "/", "icon": "tabler:home", "label": "Home"},
{"path": "/about", "icon": "tabler:user", "label": "About"},
{"path": "/blog", "icon": "tabler:article", "label": "Blog"},
{"path": "/resume", "icon": "tabler:file-text", "label": "Resume"},
{"path": "/contact", "icon": "tabler:mail", "label": "Contact"},
]
# Navigation items configuration - projects/dashboards (separated)
NAV_ITEMS_PROJECTS = [
{"path": "/projects", "icon": "tabler:folder", "label": "Projects"},
{"path": "/toronto", "icon": "tabler:map-2", "label": "Toronto Housing"},
]
@@ -135,6 +144,59 @@ def create_sidebar_divider() -> html.Div:
return html.Div(className="sidebar-divider")
def create_sidebar_content(
current_path: str = "/", current_theme: str = "dark"
) -> list[dmc.Tooltip | html.Div]:
"""Create the sidebar content list.
Args:
current_path: Current page path for active state highlighting.
current_theme: Current theme for toggle icon state.
Returns:
List of sidebar components.
"""
return [
# Brand logo
create_brand_logo(),
create_sidebar_divider(),
# Main navigation icons
*[
create_nav_icon(
icon=item["icon"],
label=item["label"],
path=item["path"],
current_path=current_path,
)
for item in NAV_ITEMS_MAIN
],
create_sidebar_divider(),
# Dashboard/Project links
*[
create_nav_icon(
icon=item["icon"],
label=item["label"],
path=item["path"],
current_path=current_path,
)
for item in NAV_ITEMS_PROJECTS
],
create_sidebar_divider(),
# Theme toggle
create_theme_toggle(current_theme),
create_sidebar_divider(),
# External links
*[
create_external_link(
url=link["url"],
icon=link["icon"],
label=link["label"],
)
for link in EXTERNAL_LINKS
],
]
def create_sidebar(current_path: str = "/", current_theme: str = "dark") -> html.Div:
"""Create the floating sidebar navigation.
@@ -146,34 +208,7 @@ def create_sidebar(current_path: str = "/", current_theme: str = "dark") -> html
Complete sidebar component.
"""
return html.Div(
[
# Brand logo
create_brand_logo(),
create_sidebar_divider(),
# Navigation icons
*[
create_nav_icon(
icon=item["icon"],
label=item["label"],
path=item["path"],
current_path=current_path,
)
for item in NAV_ITEMS
],
create_sidebar_divider(),
# Theme toggle
create_theme_toggle(current_theme),
create_sidebar_divider(),
# External links
*[
create_external_link(
url=link["url"],
icon=link["icon"],
label=link["label"],
)
for link in EXTERNAL_LINKS
],
],
className="floating-sidebar",
id="floating-sidebar",
className="floating-sidebar",
children=create_sidebar_content(current_path, current_theme),
)

View File

@@ -0,0 +1,111 @@
---
title: "Building a Data Platform as a Team of One"
date: "2025-01-15"
description: "What I learned from 5 years as the sole data professional at a mid-size company"
tags:
- data-engineering
- career
- lessons-learned
status: published
---
When I joined Summitt Energy in 2019, there was no data infrastructure. No warehouse. No pipelines. No documentation. Just a collection of spreadsheets and a Genesys Cloud instance spitting out CSVs.
Five years later, I'd built DataFlow: an enterprise platform processing 1B+ rows across 21 tables, feeding dashboards that executives actually opened. Here's what I learned doing it alone.
## The Reality of "Full Stack Data"
When you're the only data person, "full stack" isn't a buzzword—it's survival. In a single week, I might:
- Debug a Python ETL script at 7am because overnight loads failed
- Present quarterly metrics to leadership at 10am
- Design a new dimensional model over lunch
- Write SQL transformations in the afternoon
- Handle ad-hoc "can you pull this data?" requests between meetings
There's no handoff. No "that's not my job." Everything is your job.
## Prioritization Frameworks
The hardest part isn't the technical work—it's deciding what to build first when everything feels urgent.
### The 80/20 Rule, Applied Ruthlessly
I asked myself: **What 20% of the data drives 80% of decisions?**
For a contact center, that turned out to be:
- Call volume by interval
- Abandon rate
- Average handle time
- Service level
Everything else was nice-to-have. I built those four metrics first, got them bulletproof, then expanded.
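Those four metrics are simple enough to compute from raw call records. A hedged sketch (field names like `abandoned`, `handle_secs`, and `answer_secs` are illustrative, not the platform's actual schema):

```python
def contact_center_kpis(calls: list[dict], sl_threshold_secs: int = 30) -> dict:
    """Compute core contact-center metrics from call records.

    Returns volume, abandon rate, average handle time, and
    service level (fraction answered within the threshold).
    """
    offered = len(calls)
    abandoned = sum(1 for c in calls if c["abandoned"])
    answered = [c for c in calls if not c["abandoned"]]
    aht = (
        sum(c["handle_secs"] for c in answered) / len(answered)
        if answered else 0.0
    )
    within = sum(1 for c in answered if c["answer_secs"] <= sl_threshold_secs)
    return {
        "volume": offered,
        "abandon_rate": abandoned / offered if offered else 0.0,
        "avg_handle_secs": aht,
        "service_level": within / offered if offered else 0.0,
    }
```

Getting these four "bulletproof" meant agreeing on the definitions up front (does service level divide by offered or answered calls?) before writing any SQL.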
### The "Who's Screaming?" Test
When multiple stakeholders want different things:
1. Who has executive backing?
2. What's blocking revenue?
3. What's causing visible pain?
If nobody's screaming, it can probably wait.
## Technical Debt vs. Shipping
I rewrote DataFlow three times:
- **v1 (2020)**: Hacky Python scripts. Worked, barely.
- **v2 (2021)**: Proper dimensional model. Still messy code.
- **v3 (2022)**: SQLAlchemy ORM, proper error handling, logging.
- **v4 (2023)**: dbt-style transformations, FastAPI layer.
Was v1 embarrassing? Yes. Did it work? Also yes.
**The lesson**: Ship something that works, then iterate. Perfect is the enemy of done, especially when you're alone.
## Building Stakeholder Trust
The technical work is maybe 40% of the job. The rest is politics.
### Quick Wins First
Before asking for resources or patience, I delivered:
- Automated a weekly report that took someone 4 hours
- Fixed a dashboard that had been wrong for months
- Built a simple tool that answered a frequent question
Trust is earned in small deposits.
### Speak Their Language
Executives don't care about your star schema. They care about:
- "This will save 10 hours/week"
- "This will catch errors before they hit customers"
- "This will let you see X in real-time"
Translate technical work into business outcomes.
## What I'd Do Differently
1. **Document earlier**. I waited too long. When I finally wrote things down, onboarding became possible.
2. **Say no more**. Every "yes" to an ad-hoc request is a "no" to infrastructure work. Guard your time.
3. **Build monitoring first**. I spent too many mornings discovering failures manually. Alerting should be table stakes.
4. **Version control everything**. Even SQL. Even documentation. If it's not in Git, it doesn't exist.
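"Build monitoring first" can start as small as a decorator that times each pipeline step and logs failures loudly. A minimal sketch, assuming a real alert (email, Slack webhook) would eventually live in the `except` branch:

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def monitored(step_name: str):
    """Log start/finish/failure and duration for a pipeline step."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                result = fn(*args, **kwargs)
                log.info("%s ok in %.2fs", step_name, time.monotonic() - start)
                return result
            except Exception:
                # This is where an alert (email, Slack webhook) would fire.
                log.exception("%s FAILED after %.2fs", step_name, time.monotonic() - start)
                raise
        return wrapper
    return decorator

@monitored("load_calls")
def load_calls():
    # Stand-in for a real extract step.
    return 42
```

Even this much turns "discovering failures manually at 7am" into a log line you can alert on.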
## The Upside
Being a team of one forced me to learn things I'd have specialized away from on a bigger team:
- Data modeling
- Pipeline architecture
- Dashboard design
- Stakeholder management
- System administration
It's brutal, but it makes you dangerous. You understand the whole stack.
---
*This is part of a series on building data infrastructure at small companies. More posts coming on dimensional modeling, dbt patterns, and surviving legacy systems.*

View File

@@ -0,0 +1,248 @@
"""About page - Professional narrative and background."""
import dash
import dash_mantine_components as dmc
from dash import dcc
from dash_iconify import DashIconify
dash.register_page(__name__, path="/about", name="About")
# Opening section
OPENING = """I didn't start in data. I started in project management—CAPM certified, ITIL trained, \
the whole corporate playbook. Then I realized I liked building systems more than managing timelines, \
and I was better at automating reports than attending meetings about them.
That pivot led me to where I am now: 8 years deep in data engineering, analytics, and the messy \
reality of turning raw information into something people can actually use."""
# What I Actually Do section
WHAT_I_DO_SHORT = "The short version: I build data infrastructure. Pipelines, warehouses, \
dashboards, automation—the invisible machinery that makes businesses run on data instead of gut feelings."
WHAT_I_DO_LONG = """The longer version: At Summitt Energy, I've been the sole data professional \
supporting 150+ employees across 9 markets (Canada and US). I inherited nothing—no data warehouse, \
no reporting infrastructure, no documentation. Over 5 years, I built DataFlow: an enterprise \
platform processing 1B+ rows, integrating contact center data, CRM systems, and legacy tools \
that definitely weren't designed to talk to each other.
That meant learning to be a generalist. I've done ETL pipeline development (Python, SQLAlchemy), \
dimensional modeling, dashboard design (Power BI, Plotly-Dash), API integration, and more \
stakeholder management than I'd like to admit. When you're the only data person, you learn to wear every hat."""
# How I Think About Data
DATA_PHILOSOPHY_INTRO = "I'm not interested in data for data's sake. The question I always \
start with: What decision does this help someone make?"
DATA_PHILOSOPHY_DETAIL = """Most of my work has been in operations-heavy environments—contact \
centers, energy retail, logistics. These aren't glamorous domains, but they're where data can \
have massive impact. A 30% improvement in abandon rate isn't just a metric; it's thousands of \
customers who didn't hang up frustrated. A 40% reduction in reporting time means managers can \
actually manage instead of wrestling with spreadsheets."""
DATA_PHILOSOPHY_CLOSE = "I care about outcomes, not technology stacks."
# Technical skills
TECH_SKILLS = {
"Languages": "Python (Pandas, SQLAlchemy, FastAPI), SQL (MSSQL, PostgreSQL), R, VBA",
"Data Engineering": "ETL/ELT pipelines, dimensional modeling (star schema), dbt patterns, batch processing, API integration, web scraping (Selenium)",
"Visualization": "Plotly/Dash, Power BI, Tableau",
"Platforms": "Genesys Cloud, Five9, Zoho, Azure DevOps",
"Currently Learning": "Cloud certification (Azure DP-203), Airflow, Snowflake",
}
# Outside Work
OUTSIDE_WORK_INTRO = "I'm a Brazilian-Canadian based in Toronto. I speak Portuguese (native), \
English (fluent), and enough Spanish to survive."
OUTSIDE_WORK_ACTIVITIES = [
"Building automation tools for small businesses through Bandit Labs (my side project)",
"Contributing to open source (MCP servers, Claude Code plugins)",
'Trying to explain to my kid why Daddy\'s job involves "making computers talk to each other"',
]
# What I'm Looking For
LOOKING_FOR_INTRO = "I'm currently exploring Senior Data Analyst and Data Engineer roles in \
the Toronto area (or remote). I'm most interested in:"
LOOKING_FOR_ITEMS = [
"Companies that treat data as infrastructure, not an afterthought",
"Teams where I can contribute to architecture decisions, not just execute tickets",
"Operations-focused industries (energy, logistics, financial services, contact center tech)",
]
LOOKING_FOR_CLOSE = "If that sounds like your team, let's talk."
def create_section_title(title: str) -> dmc.Title:
"""Create a consistent section title."""
return dmc.Title(title, order=2, size="h3", mb="sm")
def create_opening_section() -> dmc.Paper:
"""Create the opening/intro section."""
paragraphs = OPENING.split("\n\n")
return dmc.Paper(
dmc.Stack(
[dmc.Text(p, size="md") for p in paragraphs],
gap="md",
),
p="xl",
radius="md",
withBorder=True,
)
def create_what_i_do_section() -> dmc.Paper:
"""Create the What I Actually Do section."""
return dmc.Paper(
dmc.Stack(
[
create_section_title("What I Actually Do"),
dmc.Text(WHAT_I_DO_SHORT, size="md", fw=500),
dmc.Text(WHAT_I_DO_LONG, size="md"),
],
gap="md",
),
p="xl",
radius="md",
withBorder=True,
)
def create_philosophy_section() -> dmc.Paper:
"""Create the How I Think About Data section."""
return dmc.Paper(
dmc.Stack(
[
create_section_title("How I Think About Data"),
dmc.Text(DATA_PHILOSOPHY_INTRO, size="md", fw=500),
dmc.Text(DATA_PHILOSOPHY_DETAIL, size="md"),
dmc.Text(DATA_PHILOSOPHY_CLOSE, size="md", fw=500, fs="italic"),
],
gap="md",
),
p="xl",
radius="md",
withBorder=True,
)
def create_tech_section() -> dmc.Paper:
"""Create the Technical Stuff section."""
return dmc.Paper(
dmc.Stack(
[
create_section_title("The Technical Stuff"),
dmc.Stack(
[
dmc.Group(
[
dmc.Text(category + ":", fw=600, size="sm", w=150),
dmc.Text(skills, size="sm", c="dimmed"),
],
gap="sm",
align="flex-start",
wrap="nowrap",
)
for category, skills in TECH_SKILLS.items()
],
gap="xs",
),
],
gap="md",
),
p="xl",
radius="md",
withBorder=True,
)
def create_outside_work_section() -> dmc.Paper:
"""Create the Outside Work section."""
return dmc.Paper(
dmc.Stack(
[
create_section_title("Outside Work"),
dmc.Text(OUTSIDE_WORK_INTRO, size="md"),
dmc.Text("When I'm not staring at SQL, I'm usually:", size="md"),
dmc.List(
[
dmc.ListItem(dmc.Text(item, size="md"))
for item in OUTSIDE_WORK_ACTIVITIES
],
spacing="xs",
),
],
gap="md",
),
p="xl",
radius="md",
withBorder=True,
)
def create_looking_for_section() -> dmc.Paper:
"""Create the What I'm Looking For section."""
return dmc.Paper(
dmc.Stack(
[
create_section_title("What I'm Looking For"),
dmc.Text(LOOKING_FOR_INTRO, size="md"),
dmc.List(
[
dmc.ListItem(dmc.Text(item, size="md"))
for item in LOOKING_FOR_ITEMS
],
spacing="xs",
),
dmc.Text(LOOKING_FOR_CLOSE, size="md", fw=500),
dmc.Group(
[
dcc.Link(
dmc.Button(
"Download Resume",
variant="filled",
leftSection=DashIconify(
icon="tabler:download", width=18
),
),
href="/resume",
),
dcc.Link(
dmc.Button(
"Contact Me",
variant="outline",
leftSection=DashIconify(icon="tabler:mail", width=18),
),
href="/contact",
),
],
gap="sm",
mt="md",
),
],
gap="md",
),
p="xl",
radius="md",
withBorder=True,
)
layout = dmc.Container(
dmc.Stack(
[
dmc.Title("About", order=1, ta="center", mb="lg"),
create_opening_section(),
create_what_i_do_section(),
create_philosophy_section(),
create_tech_section(),
create_outside_work_section(),
create_looking_for_section(),
dmc.Space(h=40),
],
gap="xl",
),
size="md",
py="xl",
)


@@ -0,0 +1 @@
"""Blog pages package."""


@@ -0,0 +1,147 @@
"""Blog article page - Dynamic routing for individual articles."""
import dash
import dash_mantine_components as dmc
from dash import dcc, html
from dash_iconify import DashIconify
from portfolio_app.utils.markdown_loader import get_article
dash.register_page(
__name__,
path_template="/blog/<slug>",
name="Article",
)
def create_not_found() -> dmc.Container:
"""Create 404 state for missing articles."""
return dmc.Container(
dmc.Stack(
[
dmc.ThemeIcon(
DashIconify(icon="tabler:file-unknown", width=48),
size=80,
radius="xl",
variant="light",
color="red",
),
dmc.Title("Article Not Found", order=2),
dmc.Text(
"The article you're looking for doesn't exist or has been moved.",
size="md",
c="dimmed",
ta="center",
),
dcc.Link(
dmc.Button(
"Back to Blog",
variant="light",
leftSection=DashIconify(icon="tabler:arrow-left", width=18),
),
href="/blog",
),
],
align="center",
gap="md",
py="xl",
),
size="md",
py="xl",
)
def layout(slug: str = "") -> dmc.Container:
"""Generate the article layout dynamically.
Args:
slug: Article slug from URL path.
"""
if not slug:
return create_not_found()
article = get_article(slug)
if not article:
return create_not_found()
meta = article["meta"]
return dmc.Container(
dmc.Stack(
[
# Back link
dcc.Link(
dmc.Group(
[
DashIconify(icon="tabler:arrow-left", width=16),
dmc.Text("Back to Blog", size="sm"),
],
gap="xs",
),
href="/blog",
style={"textDecoration": "none"},
),
# Article header
dmc.Paper(
dmc.Stack(
[
dmc.Title(meta["title"], order=1),
dmc.Group(
[
dmc.Group(
[
DashIconify(
icon="tabler:calendar", width=16
),
dmc.Text(
meta["date"], size="sm", c="dimmed"
),
],
gap="xs",
),
dmc.Group(
[
dmc.Badge(tag, variant="light", size="sm")
for tag in meta.get("tags", [])
],
gap="xs",
),
],
justify="space-between",
wrap="wrap",
),
(
dmc.Text(meta["description"], size="lg", c="dimmed")
if meta.get("description")
else None
),
],
gap="sm",
),
p="xl",
radius="md",
withBorder=True,
),
# Article content
dmc.Paper(
html.Div(
                        # Article body is pre-rendered HTML from the markdown loader;
                        # dangerously_allow_html=True lets dcc.Markdown pass it through.
dcc.Markdown(
article["content"],
className="article-content",
dangerously_allow_html=True,
),
),
p="xl",
radius="md",
withBorder=True,
className="article-body",
),
dmc.Space(h=40),
],
gap="lg",
),
size="md",
py="xl",
)


@@ -0,0 +1,113 @@
"""Blog index page - Article listing."""
import dash
import dash_mantine_components as dmc
from dash import dcc
from dash_iconify import DashIconify
from portfolio_app.utils.markdown_loader import Article, get_all_articles
dash.register_page(__name__, path="/blog", name="Blog")
# Page intro
INTRO_TEXT = (
"I write occasionally about data engineering, automation, and the reality of being "
"a one-person data team. No hot takes, no growth hacking—just things I've learned "
"the hard way."
)
def create_article_card(article: Article) -> dmc.Paper:
"""Create an article preview card."""
meta = article["meta"]
return dmc.Paper(
dcc.Link(
dmc.Stack(
[
dmc.Group(
[
dmc.Text(meta["title"], fw=600, size="lg"),
dmc.Text(meta["date"], size="sm", c="dimmed"),
],
justify="space-between",
align="flex-start",
wrap="wrap",
),
dmc.Text(meta["description"], size="md", c="dimmed", lineClamp=2),
dmc.Group(
[
dmc.Badge(tag, variant="light", size="sm")
for tag in meta.get("tags", [])[:3]
],
gap="xs",
),
],
gap="sm",
),
href=f"/blog/{meta['slug']}",
style={"textDecoration": "none", "color": "inherit"},
),
p="lg",
radius="md",
withBorder=True,
className="article-card",
)
def create_empty_state() -> dmc.Paper:
"""Create empty state when no articles exist."""
return dmc.Paper(
dmc.Stack(
[
dmc.ThemeIcon(
DashIconify(icon="tabler:article-off", width=48),
size=80,
radius="xl",
variant="light",
color="gray",
),
dmc.Title("No Articles Yet", order=3),
dmc.Text(
"Articles are coming soon. Check back later!",
size="md",
c="dimmed",
ta="center",
),
],
align="center",
gap="md",
py="xl",
),
p="xl",
radius="md",
withBorder=True,
)
def layout() -> dmc.Container:
"""Generate the blog index layout dynamically."""
articles = get_all_articles(include_drafts=False)
return dmc.Container(
dmc.Stack(
[
dmc.Title("Blog", order=1, ta="center"),
dmc.Text(
INTRO_TEXT, size="md", c="dimmed", ta="center", maw=600, mx="auto"
),
dmc.Divider(my="lg"),
(
dmc.Stack(
[create_article_card(article) for article in articles],
gap="lg",
)
if articles
else create_empty_state()
),
dmc.Space(h=40),
],
gap="lg",
),
size="md",
py="xl",
)
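The index assumes `get_all_articles(include_drafts=False)` returns articles already filtered and sorted newest-first. A minimal stdlib sketch of that behaviour, under the assumption that drafts carry a `draft` flag and dates are ISO-8601 strings (the real loader lives in `portfolio_app.utils.markdown_loader`):

```python
from typing import Any


def filter_and_sort_articles(
    articles: list[dict[str, Any]], include_drafts: bool = False
) -> list[dict[str, Any]]:
    """Drop drafts (unless requested) and sort newest-first by ISO date."""
    visible = [
        a for a in articles
        if include_drafts or not a["meta"].get("draft", False)
    ]
    # ISO-8601 dates (YYYY-MM-DD) sort correctly as plain strings.
    return sorted(visible, key=lambda a: a["meta"]["date"], reverse=True)


articles = [
    {"meta": {"slug": "old-post", "date": "2025-03-01"}},
    {"meta": {"slug": "wip", "date": "2026-01-10", "draft": True}},
    {"meta": {"slug": "new-post", "date": "2026-01-05"}},
]
print([a["meta"]["slug"] for a in filter_and_sort_articles(articles)])
# → ['new-post', 'old-post']
```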


@@ -0,0 +1,287 @@
"""Contact page - Form UI and direct contact information."""
import dash
import dash_mantine_components as dmc
from dash_iconify import DashIconify
dash.register_page(__name__, path="/contact", name="Contact")
# Contact information
CONTACT_INFO = {
"email": "leobrmi@hotmail.com",
"phone": "(416) 859-7936",
"linkedin": "https://linkedin.com/in/leobmiranda",
"github": "https://github.com/leomiranda",
"location": "Toronto, ON, Canada",
}
# Page intro text
INTRO_TEXT = (
"I'm currently open to Senior Data Analyst and Data Engineer roles in Toronto "
"(or remote). If you're working on something interesting and need someone who can "
"build data infrastructure from scratch, I'd like to hear about it."
)
CONSULTING_TEXT = (
"For consulting inquiries (automation, dashboards, small business data work), "
"reach out about Bandit Labs."
)
# Form subject options
SUBJECT_OPTIONS = [
{"value": "job", "label": "Job Opportunity"},
{"value": "consulting", "label": "Consulting Inquiry"},
{"value": "other", "label": "Other"},
]
def create_intro_section() -> dmc.Stack:
"""Create the intro text section."""
return dmc.Stack(
[
dmc.Title("Get In Touch", order=1, ta="center"),
dmc.Text(INTRO_TEXT, size="md", ta="center", maw=600, mx="auto"),
dmc.Text(
CONSULTING_TEXT, size="md", ta="center", maw=600, mx="auto", c="dimmed"
),
],
gap="md",
mb="xl",
)
def create_contact_form() -> dmc.Paper:
"""Create the contact form (disabled in Phase 1)."""
return dmc.Paper(
dmc.Stack(
[
dmc.Title("Send a Message", order=2, size="h4"),
dmc.Alert(
"Contact form submission is coming soon. Please use the direct contact "
"methods below for now.",
title="Form Coming Soon",
color="blue",
variant="light",
),
dmc.TextInput(
label="Name",
placeholder="Your name",
leftSection=DashIconify(icon="tabler:user", width=18),
disabled=True,
),
dmc.TextInput(
label="Email",
placeholder="your.email@example.com",
leftSection=DashIconify(icon="tabler:mail", width=18),
disabled=True,
),
dmc.Select(
label="Subject",
placeholder="Select a subject",
data=SUBJECT_OPTIONS,
leftSection=DashIconify(icon="tabler:tag", width=18),
disabled=True,
),
dmc.Textarea(
label="Message",
placeholder="Your message...",
minRows=4,
disabled=True,
),
dmc.Button(
"Send Message",
fullWidth=True,
leftSection=DashIconify(icon="tabler:send", width=18),
disabled=True,
),
],
gap="md",
),
p="xl",
radius="md",
withBorder=True,
)
def create_direct_contact() -> dmc.Paper:
"""Create the direct contact information section."""
return dmc.Paper(
dmc.Stack(
[
dmc.Title("Direct Contact", order=2, size="h4"),
dmc.Stack(
[
# Email
dmc.Group(
[
dmc.ThemeIcon(
DashIconify(icon="tabler:mail", width=20),
size="lg",
radius="md",
variant="light",
),
dmc.Stack(
[
dmc.Text("Email", size="sm", c="dimmed"),
dmc.Anchor(
CONTACT_INFO["email"],
href=f"mailto:{CONTACT_INFO['email']}",
size="md",
fw=500,
),
],
gap=0,
),
],
gap="md",
),
# Phone
dmc.Group(
[
dmc.ThemeIcon(
DashIconify(icon="tabler:phone", width=20),
size="lg",
radius="md",
variant="light",
),
dmc.Stack(
[
dmc.Text("Phone", size="sm", c="dimmed"),
dmc.Anchor(
CONTACT_INFO["phone"],
href=f"tel:{CONTACT_INFO['phone'].replace('(', '').replace(')', '').replace(' ', '').replace('-', '')}",
size="md",
fw=500,
),
],
gap=0,
),
],
gap="md",
),
# LinkedIn
dmc.Group(
[
dmc.ThemeIcon(
DashIconify(icon="tabler:brand-linkedin", width=20),
size="lg",
radius="md",
variant="light",
color="blue",
),
dmc.Stack(
[
dmc.Text("LinkedIn", size="sm", c="dimmed"),
dmc.Anchor(
"linkedin.com/in/leobmiranda",
href=CONTACT_INFO["linkedin"],
target="_blank",
size="md",
fw=500,
),
],
gap=0,
),
],
gap="md",
),
# GitHub
dmc.Group(
[
dmc.ThemeIcon(
DashIconify(icon="tabler:brand-github", width=20),
size="lg",
radius="md",
variant="light",
),
dmc.Stack(
[
dmc.Text("GitHub", size="sm", c="dimmed"),
dmc.Anchor(
"github.com/leomiranda",
href=CONTACT_INFO["github"],
target="_blank",
size="md",
fw=500,
),
],
gap=0,
),
],
gap="md",
),
],
gap="lg",
),
],
gap="lg",
),
p="xl",
radius="md",
withBorder=True,
)
def create_location_section() -> dmc.Paper:
"""Create the location and work eligibility section."""
return dmc.Paper(
dmc.Stack(
[
dmc.Title("Location", order=2, size="h4"),
dmc.Group(
[
dmc.ThemeIcon(
DashIconify(icon="tabler:map-pin", width=20),
size="lg",
radius="md",
variant="light",
color="red",
),
dmc.Stack(
[
dmc.Text(CONTACT_INFO["location"], size="md", fw=500),
dmc.Text(
"Canadian Citizen | Eligible to work in Canada and US",
size="sm",
c="dimmed",
),
],
gap=0,
),
],
gap="md",
),
],
gap="md",
),
p="xl",
radius="md",
withBorder=True,
)
layout = dmc.Container(
dmc.Stack(
[
create_intro_section(),
dmc.SimpleGrid(
[
create_contact_form(),
dmc.Stack(
[
create_direct_contact(),
create_location_section(),
],
gap="lg",
),
],
cols={"base": 1, "md": 2},
spacing="xl",
),
dmc.Space(h=40),
],
gap="lg",
),
size="lg",
py="xl",
)
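When the form submission ships, its callback will need server-side validation before anything is sent. A hedged sketch of that validation step — `validate_submission` is a hypothetical helper, not part of the app yet; the subject values mirror `SUBJECT_OPTIONS` above:

```python
import re

# Mirrors the `value` keys in SUBJECT_OPTIONS.
VALID_SUBJECTS = {"job", "consulting", "other"}
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")


def validate_submission(name: str, email: str, subject: str, message: str) -> list[str]:
    """Return human-readable errors; an empty list means the form may be sent."""
    errors = []
    if not name.strip():
        errors.append("Name is required.")
    if not EMAIL_RE.match(email.strip()):
        errors.append("Enter a valid email address.")
    if subject not in VALID_SUBJECTS:
        errors.append("Select a subject.")
    if len(message.strip()) < 10:
        errors.append("Message should be at least 10 characters.")
    return errors
```

The callback would surface the returned errors in the `dmc.Alert` and only proceed to delivery (email, webhook, or similar) when the list is empty.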


@@ -1,81 +1,118 @@
"""Home landing page - Portfolio entry point."""
import dash
import dash_mantine_components as dmc
from dash import dcc
from dash_iconify import DashIconify
dash.register_page(__name__, path="/", name="Home")
# Hero content from blueprint
HEADLINE = "I turn messy data into systems that actually work."
SUBHEAD = (
    "Data Engineer & Analytics Specialist. 8 years building pipelines, dashboards, "
    "and the infrastructure nobody sees but everyone depends on. Based in Toronto."
)
# Impact metrics
IMPACT_STATS = [
    {"value": "1B+", "label": "Rows processed daily across enterprise platform"},
    {"value": "40%", "label": "Efficiency gain through automation"},
    {"value": "5 Years", "label": "Building DataFlow from zero"},
]
# Featured project
FEATURED_PROJECT = {
    "title": "Toronto Housing Market Dashboard",
    "description": (
        "Real-time analytics on Toronto's housing trends. "
        "dbt-powered ETL, Python scraping, Plotly visualization."
    ),
    "status": "Live",
    "dashboard_link": "/toronto",
    "repo_link": "https://github.com/leomiranda/personal-portfolio",
}
# Brief intro
INTRO_TEXT = (
    "I'm a data engineer who's spent the last 8 years in the trenches—building the "
    "infrastructure that feeds dashboards, automates the boring stuff, and makes data "
    "actually usable. Most of my work has been in contact center operations and energy, "
    "where I've had to be scrappy: one-person data teams, legacy systems, stakeholders "
    "who need answers yesterday."
)
INTRO_CLOSING = "I like solving real problems, not theoretical ones."
def create_hero_section() -> dmc.Stack:
    """Create the hero section with headline, subhead, and CTAs."""
    return dmc.Stack(
        [
            dmc.Title(
                HEADLINE,
                order=1,
                ta="center",
                size="2.5rem",
            ),
            dmc.Text(
                SUBHEAD,
                size="lg",
                c="dimmed",
                ta="center",
                maw=700,
                mx="auto",
            ),
            dmc.Group(
                [
                    dcc.Link(
                        dmc.Button(
                            "View Projects",
                            size="lg",
                            variant="filled",
                            leftSection=DashIconify(icon="tabler:folder", width=20),
                        ),
                        href="/projects",
                    ),
                    dcc.Link(
                        dmc.Button(
                            "Get In Touch",
                            size="lg",
                            variant="outline",
                            leftSection=DashIconify(icon="tabler:mail", width=20),
                        ),
                        href="/contact",
                    ),
                ],
                justify="center",
                gap="md",
                mt="md",
            ),
        ],
        gap="md",
        py="xl",
    )
def create_impact_stat(stat: dict[str, str]) -> dmc.Stack:
    """Create a single impact stat."""
    return dmc.Stack(
        [
            dmc.Text(stat["value"], fw=700, size="2rem", ta="center"),
            dmc.Text(stat["label"], size="sm", c="dimmed", ta="center"),
        ],
        gap="xs",
        align="center",
    )
def create_impact_strip() -> dmc.Paper:
    """Create the impact statistics strip."""
    return dmc.Paper(
        dmc.SimpleGrid(
            [create_impact_stat(stat) for stat in IMPACT_STATS],
            cols={"base": 1, "sm": 3},
            spacing="xl",
        ),
        p="xl",
        radius="md",
@@ -83,16 +120,56 @@ def create_summary_section() -> dmc.Paper:
    )
def create_featured_project() -> dmc.Paper:
    """Create the featured project card."""
    return dmc.Paper(
        dmc.Stack(
            [
                dmc.Group(
                    [
                        dmc.Title("Featured Project", order=2, size="h3"),
                        dmc.Badge(
                            FEATURED_PROJECT["status"],
                            color="green",
                            variant="light",
                            size="lg",
                        ),
                    ],
                    justify="space-between",
                ),
                dmc.Title(
                    FEATURED_PROJECT["title"],
                    order=3,
                    size="h4",
                ),
                dmc.Text(
                    FEATURED_PROJECT["description"],
                    size="md",
                    c="dimmed",
                ),
                dmc.Group(
                    [
                        dcc.Link(
                            dmc.Button(
                                "View Dashboard",
                                variant="light",
                                leftSection=DashIconify(
                                    icon="tabler:chart-bar", width=18
                                ),
                            ),
                            href=FEATURED_PROJECT["dashboard_link"],
                        ),
                        dmc.Anchor(
                            dmc.Button(
                                "View Repository",
                                variant="subtle",
                                leftSection=DashIconify(
                                    icon="tabler:brand-github", width=18
                                ),
                            ),
                            href=FEATURED_PROJECT["repo_link"],
                            target="_blank",
                        ),
                    ],
                    gap="sm",
                ),
@@ -105,38 +182,13 @@ def create_tech_stack_section() -> dmc.Paper:
    )
def create_intro_section() -> dmc.Paper:
    """Create the brief intro section."""
    return dmc.Paper(
        dmc.Stack(
            [
                dmc.Text(INTRO_TEXT, size="md"),
                dmc.Text(INTRO_CLOSING, size="md", fw=500, fs="italic"),
            ],
            gap="md",
        ),
@@ -146,20 +198,13 @@ def create_projects_section() -> dmc.Paper:
    )
layout = dmc.Container(
    dmc.Stack(
        [
            create_hero_section(),
            create_impact_strip(),
            create_featured_project(),
            create_intro_section(),
            dmc.Space(h=40),
        ],
        gap="xl",


@@ -0,0 +1,304 @@
"""Projects overview page - Hub for all portfolio projects."""
from typing import Any
import dash
import dash_mantine_components as dmc
from dash import dcc
from dash_iconify import DashIconify
dash.register_page(__name__, path="/projects", name="Projects")
# Page intro
INTRO_TEXT = (
"These are projects I've built—some professional (anonymized where needed), "
"some personal. Each one taught me something. Use the sidebar to jump directly "
"to live dashboards or explore the overviews below."
)
# Project definitions
PROJECTS: list[dict[str, Any]] = [
{
"title": "Toronto Housing Market Dashboard",
"type": "Personal Project",
"status": "Live",
"status_color": "green",
"problem": (
"Toronto's housing market moves fast, and most publicly available data "
"is either outdated, behind paywalls, or scattered across dozens of sources. "
"I wanted a single dashboard that tracked trends in real-time."
),
"built": [
"Data Pipeline: Python scraper pulling listings data, automated on schedule",
"Transformation Layer: dbt-based SQL architecture (staging -> intermediate -> marts)",
"Visualization: Interactive Plotly-Dash dashboard with filters by neighborhood, price range, property type",
"Infrastructure: PostgreSQL backend, version-controlled in Git",
],
"tech_stack": "Python, dbt, PostgreSQL, Plotly-Dash, GitHub Actions",
"learned": (
"Real estate data is messy as hell. Listings get pulled, prices change, "
"duplicates are everywhere. Building a reliable pipeline meant implementing "
'serious data quality checks and learning to embrace "good enough" over "perfect."'
),
"dashboard_link": "/toronto",
"repo_link": "https://github.com/leomiranda/personal-portfolio",
},
{
"title": "US Retail Energy Price Predictor",
"type": "Personal Project",
"status": "Coming Soon",
"status_color": "yellow",
"problem": (
"Retail energy pricing in deregulated US markets is volatile and opaque. "
"Consumers and analysts lack accessible tools to understand pricing trends "
"and forecast where rates are headed."
),
"built": [
"Data Pipeline: Automated ingestion of public pricing data across multiple US markets",
"ML Model: Price prediction using time series forecasting (ARIMA, Prophet, or similar)",
"Transformation Layer: dbt-based SQL architecture for feature engineering",
"Visualization: Interactive dashboard showing historical trends + predictions by state/market",
],
"tech_stack": "Python, Scikit-learn, dbt, PostgreSQL, Plotly-Dash",
"learned": (
"This showcases the ML side of my skillset—something the Toronto Housing "
"dashboard doesn't cover. It also leverages my domain expertise from 5+ years "
"in retail energy operations."
),
"dashboard_link": None,
"repo_link": None,
},
{
"title": "DataFlow Platform",
"type": "Professional",
"status": "Case Study Pending",
"status_color": "gray",
"problem": (
"When I joined Summitt Energy, there was no data infrastructure. "
"Reports were manual. Insights were guesswork. I was hired to fix that."
),
"built": [
"v1 (2020): Basic ETL scripts pulling Genesys Cloud data into MSSQL",
"v2 (2021): Dimensional model (star schema) with fact/dimension tables",
"v3 (2022): Python refactor with SQLAlchemy ORM, batch processing, error handling",
"v4 (2023-24): dbt-pattern SQL views (staging -> intermediate -> marts), FastAPI layer, CLI tools",
],
"tech_stack": "Python, SQLAlchemy, FastAPI, MSSQL, Power BI, Genesys Cloud API",
"impact": [
"21 tables, 1B+ rows",
"5,000+ daily transactions processed",
"40% improvement in reporting efficiency",
"30% reduction in call abandon rate",
"50% faster Average Speed to Answer",
],
"learned": (
"Building data infrastructure as a team of one forces brutal prioritization. "
"I learned to ship imperfect solutions fast, iterate based on feedback, "
"and never underestimate how long stakeholder buy-in takes."
),
"note": "This is proprietary work. A sanitized case study with architecture patterns (no proprietary data) will be published in Phase 3.",
"dashboard_link": None,
"repo_link": None,
},
{
"title": "AI-Assisted Automation (Bandit Labs)",
"type": "Consulting/Side Business",
"status": "Active",
"status_color": "blue",
"problem": (
"Small businesses don't need enterprise data platforms—they need someone "
"to eliminate the 4 hours/week they spend manually entering receipts."
),
"built": [
"Receipt Processing Automation: OCR pipeline (Tesseract, Google Vision) extracting purchase data from photos",
"Product Margin Tracker: Plotly-Dash dashboard with real-time profitability insights",
"Claude Code Plugins: MCP servers for Gitea, Wiki.js, NetBox integration",
],
"tech_stack": "Python, Tesseract, Google Vision API, Plotly-Dash, QuickBooks API",
"learned": (
"Small businesses are underserved by the data/automation industry. "
"Everyone wants to sell them enterprise software they don't need. "
"I like solving problems at a scale where the impact is immediately visible."
),
"dashboard_link": None,
"repo_link": None,
"external_link": "/lab",
"external_label": "Learn More About Bandit Labs",
},
]
def create_project_card(project: dict[str, Any]) -> dmc.Paper:
"""Create a detailed project card."""
# Build the "What I Built" list
built_items = project.get("built", [])
built_section = (
dmc.Stack(
[
dmc.Text("What I Built:", fw=600, size="sm"),
dmc.List(
[dmc.ListItem(dmc.Text(item, size="sm")) for item in built_items],
spacing="xs",
size="sm",
),
],
gap="xs",
)
if built_items
else None
)
# Build impact section for DataFlow
impact_items = project.get("impact", [])
impact_section = (
dmc.Stack(
[
dmc.Text("Impact:", fw=600, size="sm"),
dmc.Group(
[
dmc.Badge(item, variant="light", size="sm")
for item in impact_items
],
gap="xs",
),
],
gap="xs",
)
if impact_items
else None
)
# Build action buttons
buttons = []
if project.get("dashboard_link"):
buttons.append(
dcc.Link(
dmc.Button(
"View Dashboard",
variant="light",
size="sm",
leftSection=DashIconify(icon="tabler:chart-bar", width=16),
),
href=project["dashboard_link"],
)
)
if project.get("repo_link"):
buttons.append(
dmc.Anchor(
dmc.Button(
"View Repository",
variant="subtle",
size="sm",
leftSection=DashIconify(icon="tabler:brand-github", width=16),
),
href=project["repo_link"],
target="_blank",
)
)
if project.get("external_link"):
buttons.append(
dcc.Link(
dmc.Button(
project.get("external_label", "Learn More"),
variant="outline",
size="sm",
leftSection=DashIconify(icon="tabler:arrow-right", width=16),
),
href=project["external_link"],
)
)
# Handle "Coming Soon" state
if project["status"] == "Coming Soon" and not buttons:
buttons.append(
dmc.Badge("Coming Soon", variant="light", color="yellow", size="lg")
)
return dmc.Paper(
dmc.Stack(
[
# Header
dmc.Group(
[
dmc.Stack(
[
dmc.Text(project["title"], fw=600, size="lg"),
dmc.Text(project["type"], size="sm", c="dimmed"),
],
gap=0,
),
dmc.Badge(
project["status"],
color=project["status_color"],
variant="light",
size="lg",
),
],
justify="space-between",
align="flex-start",
),
# Problem
dmc.Stack(
[
dmc.Text("The Problem:", fw=600, size="sm"),
dmc.Text(project["problem"], size="sm", c="dimmed"),
],
gap="xs",
),
# What I Built
built_section,
# Impact (if exists)
impact_section,
# Tech Stack
dmc.Group(
[
dmc.Text("Tech Stack:", fw=600, size="sm"),
dmc.Text(project["tech_stack"], size="sm", c="dimmed"),
],
gap="xs",
),
# What I Learned
dmc.Stack(
[
dmc.Text("What I Learned:", fw=600, size="sm"),
dmc.Text(project["learned"], size="sm", fs="italic"),
],
gap="xs",
),
# Note (if exists)
(
dmc.Alert(
project["note"],
color="gray",
variant="light",
)
if project.get("note")
else None
),
# Action buttons
dmc.Group(buttons, gap="sm") if buttons else None,
],
gap="md",
),
p="xl",
radius="md",
withBorder=True,
)
layout = dmc.Container(
dmc.Stack(
[
dmc.Title("Projects", order=1, ta="center"),
dmc.Text(
INTRO_TEXT, size="md", c="dimmed", ta="center", maw=700, mx="auto"
),
dmc.Divider(my="lg"),
*[create_project_card(project) for project in PROJECTS],
dmc.Space(h=40),
],
gap="xl",
),
size="md",
py="xl",
)


@@ -0,0 +1,362 @@
"""Resume page - Inline display with download options."""
from typing import Any
import dash
import dash_mantine_components as dmc
from dash_iconify import DashIconify
dash.register_page(__name__, path="/resume", name="Resume")
# =============================================================================
# HUMAN TASK: Upload resume content via Gitea
# Replace the placeholder content below with actual resume data.
# You can upload PDF/DOCX files to portfolio_app/assets/resume/
# =============================================================================
# Resume sections - replace with actual content
RESUME_HEADER = {
"name": "Leo Miranda",
"title": "Data Engineer & Analytics Specialist",
"location": "Toronto, ON, Canada",
"email": "leobrmi@hotmail.com",
"phone": "(416) 859-7936",
"linkedin": "linkedin.com/in/leobmiranda",
"github": "github.com/leomiranda",
}
RESUME_SUMMARY = (
"Data Engineer with 8 years of experience building enterprise analytics platforms, "
"ETL pipelines, and business intelligence solutions. Proven track record of delivering "
"40% efficiency gains through automation and data infrastructure modernization. "
"Expert in Python, SQL, and dimensional modeling with deep domain expertise in "
"contact center operations and energy retail."
)
# Experience - placeholder structure
EXPERIENCE = [
{
"title": "Senior Data Analyst / Data Engineer",
"company": "Summitt Energy",
"location": "Toronto, ON",
"period": "2019 - Present",
"highlights": [
"Built DataFlow platform from scratch: 21 tables, 1B+ rows, processing 5,000+ daily transactions",
"Achieved 40% improvement in reporting efficiency through automated ETL pipelines",
"Reduced call abandon rate by 30% via KPI framework and real-time dashboards",
"Sole data professional supporting 150+ employees across 9 markets (Canada + US)",
],
},
{
"title": "IT Project Coordinator",
"company": "Petrobras",
"location": "Rio de Janeiro, Brazil",
"period": "2015 - 2018",
"highlights": [
"Coordinated IT infrastructure projects for Fortune 500 energy company",
"Managed vendor relationships and project timelines",
"Developed reporting automation reducing manual effort by 60%",
],
},
{
"title": "Project Management Associate",
"company": "Project Management Institute",
"location": "Remote",
"period": "2014 - 2015",
"highlights": [
"Supported global project management standards development",
"CAPM and ITIL certified during this period",
],
},
]
# Skills - organized by category
SKILLS = {
"Languages": ["Python", "SQL", "R", "VBA"],
"Data Engineering": [
"ETL/ELT Pipelines",
"Dimensional Modeling",
"dbt",
"SQLAlchemy",
"FastAPI",
],
"Databases": ["PostgreSQL", "MSSQL", "Redis"],
"Visualization": ["Plotly/Dash", "Power BI", "Tableau"],
"Platforms": ["Genesys Cloud", "Five9", "Zoho CRM", "Azure DevOps"],
"Currently Learning": ["Azure DP-203", "Airflow", "Snowflake"],
}
# Education
EDUCATION = [
{
"degree": "Bachelor of Business Administration",
"school": "Universidade Federal do Rio de Janeiro",
"year": "2014",
},
]
# Certifications
CERTIFICATIONS = [
"CAPM (Certified Associate in Project Management)",
"ITIL Foundation",
"Azure DP-203 (In Progress)",
]
def create_header_section() -> dmc.Paper:
"""Create the resume header with contact info."""
return dmc.Paper(
dmc.Stack(
[
dmc.Title(RESUME_HEADER["name"], order=1, ta="center"),
dmc.Text(RESUME_HEADER["title"], size="xl", c="dimmed", ta="center"),
dmc.Divider(my="sm"),
dmc.Group(
[
dmc.Group(
[
DashIconify(icon="tabler:map-pin", width=16),
dmc.Text(RESUME_HEADER["location"], size="sm"),
],
gap="xs",
),
dmc.Group(
[
DashIconify(icon="tabler:mail", width=16),
dmc.Text(RESUME_HEADER["email"], size="sm"),
],
gap="xs",
),
dmc.Group(
[
DashIconify(icon="tabler:phone", width=16),
dmc.Text(RESUME_HEADER["phone"], size="sm"),
],
gap="xs",
),
],
justify="center",
gap="lg",
wrap="wrap",
),
dmc.Group(
[
dmc.Anchor(
dmc.Group(
[
DashIconify(icon="tabler:brand-linkedin", width=16),
dmc.Text("LinkedIn", size="sm"),
],
gap="xs",
),
href=f"https://{RESUME_HEADER['linkedin']}",
target="_blank",
),
dmc.Anchor(
dmc.Group(
[
DashIconify(icon="tabler:brand-github", width=16),
dmc.Text("GitHub", size="sm"),
],
gap="xs",
),
href=f"https://{RESUME_HEADER['github']}",
target="_blank",
),
],
justify="center",
gap="lg",
),
],
gap="sm",
),
p="xl",
radius="md",
withBorder=True,
)
def create_download_section() -> dmc.Group:
"""Create download buttons for resume files."""
# Note: Buttons disabled until files are uploaded
return dmc.Group(
[
dmc.Button(
"Download PDF",
variant="filled",
leftSection=DashIconify(icon="tabler:file-type-pdf", width=18),
disabled=True, # Enable after uploading resume.pdf to assets
),
dmc.Button(
"Download DOCX",
variant="outline",
leftSection=DashIconify(icon="tabler:file-type-docx", width=18),
disabled=True, # Enable after uploading resume.docx to assets
),
dmc.Anchor(
dmc.Button(
"View on LinkedIn",
variant="subtle",
leftSection=DashIconify(icon="tabler:brand-linkedin", width=18),
),
href=f"https://{RESUME_HEADER['linkedin']}",
target="_blank",
),
],
justify="center",
gap="md",
)
def create_summary_section() -> dmc.Paper:
"""Create the professional summary section."""
return dmc.Paper(
dmc.Stack(
[
dmc.Title("Professional Summary", order=2, size="h4"),
dmc.Text(RESUME_SUMMARY, size="md"),
],
gap="sm",
),
p="lg",
radius="md",
withBorder=True,
)
def create_experience_item(exp: dict[str, Any]) -> dmc.Stack:
"""Create a single experience entry."""
return dmc.Stack(
[
dmc.Group(
[
dmc.Text(exp["title"], fw=600),
dmc.Text(exp["period"], size="sm", c="dimmed"),
],
justify="space-between",
),
dmc.Text(f"{exp['company']} | {exp['location']}", size="sm", c="dimmed"),
dmc.List(
[dmc.ListItem(dmc.Text(h, size="sm")) for h in exp["highlights"]],
spacing="xs",
size="sm",
),
],
gap="xs",
)
def create_experience_section() -> dmc.Paper:
"""Create the experience section."""
return dmc.Paper(
dmc.Stack(
[
dmc.Title("Experience", order=2, size="h4"),
*[create_experience_item(exp) for exp in EXPERIENCE],
],
gap="lg",
),
p="lg",
radius="md",
withBorder=True,
)
def create_skills_section() -> dmc.Paper:
"""Create the skills section with badges."""
return dmc.Paper(
dmc.Stack(
[
dmc.Title("Skills", order=2, size="h4"),
dmc.SimpleGrid(
[
dmc.Stack(
[
dmc.Text(category, fw=600, size="sm"),
dmc.Group(
[
dmc.Badge(skill, variant="light", size="sm")
for skill in skills
],
gap="xs",
),
],
gap="xs",
)
for category, skills in SKILLS.items()
],
cols={"base": 1, "sm": 2},
spacing="md",
),
],
gap="md",
),
p="lg",
radius="md",
withBorder=True,
)
def create_education_section() -> dmc.Paper:
"""Create education and certifications section."""
return dmc.Paper(
dmc.Stack(
[
dmc.Title("Education & Certifications", order=2, size="h4"),
dmc.Stack(
[
dmc.Stack(
[
dmc.Text(edu["degree"], fw=600),
dmc.Text(
f"{edu['school']} | {edu['year']}",
size="sm",
c="dimmed",
),
],
gap=0,
)
for edu in EDUCATION
],
gap="sm",
),
dmc.Divider(my="sm"),
dmc.Group(
[
dmc.Badge(cert, variant="outline", size="md")
for cert in CERTIFICATIONS
],
gap="xs",
),
],
gap="md",
),
p="lg",
radius="md",
withBorder=True,
)
layout = dmc.Container(
dmc.Stack(
[
create_header_section(),
create_download_section(),
dmc.Alert(
"Resume files (PDF/DOCX) will be available for download once uploaded. "
"The inline content below is a preview.",
title="Downloads Coming Soon",
color="blue",
variant="light",
),
create_summary_section(),
create_experience_section(),
create_skills_section(),
create_education_section(),
dmc.Space(h=40),
],
gap="lg",
),
size="md",
py="xl",
)

View File

@@ -0,0 +1,9 @@
"""Utility modules for the portfolio app."""
from portfolio_app.utils.markdown_loader import (
get_all_articles,
get_article,
render_markdown,
)
__all__ = ["get_all_articles", "get_article", "render_markdown"]

View File

@@ -0,0 +1,109 @@
"""Markdown article loader with frontmatter support."""
from pathlib import Path
from typing import TypedDict
import frontmatter
import markdown
from markdown.extensions.codehilite import CodeHiliteExtension
from markdown.extensions.fenced_code import FencedCodeExtension
from markdown.extensions.tables import TableExtension
from markdown.extensions.toc import TocExtension
# Content directory (relative to this file's package)
CONTENT_DIR = Path(__file__).parent.parent / "content" / "blog"
class ArticleMeta(TypedDict):
"""Article metadata from frontmatter."""
slug: str
title: str
date: str
description: str
tags: list[str]
status: str # "published" or "draft"
class Article(TypedDict):
"""Full article with metadata and content."""
meta: ArticleMeta
content: str
html: str
def render_markdown(content: str) -> str:
"""Convert markdown to HTML with syntax highlighting.
Args:
content: Raw markdown string.
Returns:
HTML string with syntax-highlighted code blocks.
"""
md = markdown.Markdown(
extensions=[
FencedCodeExtension(),
CodeHiliteExtension(css_class="highlight", guess_lang=False),
TableExtension(),
TocExtension(permalink=True),
"nl2br",
]
)
return str(md.convert(content))
def get_article(slug: str) -> Article | None:
"""Load a single article by slug.
Args:
slug: Article slug (filename without .md extension).
Returns:
Article dict or None if not found.
"""
filepath = CONTENT_DIR / f"{slug}.md"
if not filepath.exists():
return None
post = frontmatter.load(filepath)
meta: ArticleMeta = {
"slug": slug,
"title": post.get("title", slug.replace("-", " ").title()),
"date": str(post.get("date", "")),
"description": post.get("description", ""),
"tags": post.get("tags", []),
"status": post.get("status", "published"),
}
return {
"meta": meta,
"content": post.content,
"html": render_markdown(post.content),
}
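`get_article` assumes each post lives at `content/blog/<slug>.md` with a YAML frontmatter header delimited by `---` lines. The split that `python-frontmatter` performs can be illustrated with a minimal stdlib sketch (illustration only — the real library parses full YAML, so `tags` can be a proper list rather than a string):

```python
def split_frontmatter(text: str) -> tuple[dict[str, str], str]:
    """Split a '---'-delimited frontmatter header from the article body.

    Sketch for illustration: handles only flat 'key: value' pairs,
    unlike python-frontmatter's full YAML parsing.
    """
    if not text.startswith("---\n"):
        return {}, text
    header, _, body = text[4:].partition("\n---\n")
    meta: dict[str, str] = {}
    for line in header.splitlines():
        key, _, value = line.partition(":")
        if key.strip():
            meta[key.strip()] = value.strip()
    return meta, body.lstrip("\n")


# A hypothetical article file in the format the loader expects:
sample = """---
title: Hello, Dash
date: 2026-01-15
status: published
---

First paragraph of the article.
"""
meta, body = split_frontmatter(sample)
# meta["title"] → "Hello, Dash"; body starts with the first paragraph
```

Keys absent from the header fall back to the defaults shown in `get_article` (e.g. a title derived from the slug, `status` defaulting to `"published"`).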
def get_all_articles(include_drafts: bool = False) -> list[Article]:
"""Load all articles from the content directory.
Args:
include_drafts: If True, include articles with status="draft".
Returns:
List of articles sorted by date (newest first).
"""
if not CONTENT_DIR.exists():
return []
articles: list[Article] = []
for filepath in CONTENT_DIR.glob("*.md"):
slug = filepath.stem
article = get_article(slug)
if article and (include_drafts or article["meta"]["status"] == "published"):
articles.append(article)
# Sort by date descending
articles.sort(key=lambda a: a["meta"]["date"], reverse=True)
return articles
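The date sort relies on frontmatter dates serializing to ISO-8601 strings (`YYYY-MM-DD`), which sort lexicographically in chronological order; articles with an empty `date` fall to the end. A quick demonstration of that assumption:

```python
# Minimal stand-ins for the Article dicts built by get_article.
articles = [
    {"meta": {"slug": "older", "date": "2025-11-02"}},
    {"meta": {"slug": "newest", "date": "2026-01-15"}},
    {"meta": {"slug": "undated", "date": ""}},  # empty string sorts last
]
articles.sort(key=lambda a: a["meta"]["date"], reverse=True)
order = [a["meta"]["slug"] for a in articles]
# → ["newest", "older", "undated"]
```

If dates were ever written in a non-ISO format (e.g. `15/01/2026`), the lexicographic sort would silently produce the wrong order, so the ISO convention is load-bearing here.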

View File

@@ -48,6 +48,11 @@ dependencies = [
# Utilities # Utilities
"python-dotenv>=1.0", "python-dotenv>=1.0",
"httpx>=0.28", "httpx>=0.28",
# Blog/Markdown
"python-frontmatter>=1.1",
"markdown>=3.5",
"pygments>=2.17",
] ]
[project.optional-dependencies] [project.optional-dependencies]
@@ -148,5 +153,7 @@ module = [
"pdfplumber.*", "pdfplumber.*",
"tabula.*", "tabula.*",
"pydantic_settings.*", "pydantic_settings.*",
"frontmatter.*",
"markdown.*",
] ]
ignore_missing_imports = true ignore_missing_imports = true