Building a Real-Time Flood Monitoring Dashboard for Midleton

March 29, 2026

Problem: Local flood awareness depends on fragmented live sources.
Build: A real-time flood-monitoring dashboard for Midleton.
Stack: OPW, Met Éireann, and Marine Institute feeds; PostgreSQL/PostGIS; a Node.js API; and Leaflet.
Outcome: One situational view that improves local awareness during risk periods.

Flooding in Ireland is becoming more volatile, and Midleton has a recent history that makes real-time local awareness matter. When heavy rainfall, river levels, tides, and official warnings all shift within hours, a resident or local observer needs one operational picture rather than four browser tabs.

That was the core monitoring problem here: the data already exists, but it is fragmented across separate public sources with different update cycles, formats, and geographic assumptions.

The Midleton dashboard brings those live feeds into one place. It shows current river conditions, rainfall context, warnings, and tide information in a single situational view designed for faster interpretation during risk periods.

Problem

Midleton sits at the confluence of the Owennacurra and Dungourney rivers and has a documented history of flooding. During rainy periods, users need to know more than a raw gauge number. They need to know whether river levels are rising toward unusual territory, whether heavy rain is forecast, whether warnings are active, and whether tides may reduce downstream discharge.

Those data streams are publicly available in Ireland, but they are spread across different agency sites and APIs. Users end up switching sources at exactly the point where speed and clarity matter most.

This project began with the same place-first instinct as Mapping Ardmore in the 1926 Census: take scattered data and rebuild it around one local area.

What I Built

I built a real-time flood-monitoring dashboard for Midleton that aggregates data from four Irish public sources and refreshes the key operational inputs every 15 minutes:

  • OPW Water Level Stations for the Ballyedmond and Townparks gauges
  • Met Éireann weather warnings
  • Met Éireann precipitation forecasts for Midleton's coordinates
  • Marine Institute tide predictions for nearby coastal stations

The dashboard combines those feeds into a single local status view without pretending to make deterministic flood predictions. Instead, it gives users clearer context around the current reading: gauge level, freshness, warning state, rainfall outlook, and tidal timing.

On the frontend, those results are presented through a map, time-series charts, benchmark comparisons, and supporting detail panels. The aim is practical situational awareness, not just a feed aggregator.

Outcome and Practical Usefulness

A single page now replaces the usual sequence of gauge checks, warning pages, forecast lookups, and tide tables. Midleton's current conditions are faster to interpret because the relevant context sits together.

The dashboard improves local awareness in four ways:

  • faster local interpretation of whether conditions look ordinary or unusual
  • less source-switching during rain and flood-risk periods
  • clearer context around the interaction between gauges, warnings, rainfall, and tide timing
  • graceful degradation when one data source is stale or temporarily offline

That last point matters. Flood dashboards are most valuable when conditions are messy, and those are exactly the moments when one upstream feed may fail. The system is designed to remain informative even during partial outages.

Data Sources

All data comes from open Irish public sources.

  • The Office of Public Works operates the water level monitoring network at waterlevel.ie.
  • Met Éireann provides both weather warnings and precipitation forecasts through its open APIs.
  • The Marine Institute supplies tide predictions.

These are read-only public sources and the polling logic respects upstream constraints, especially the OPW request not to poll more than once every 15 minutes.

Technical Approach

Architecture Summary

The codebase is a monorepo with two main parts: a Node.js API backed by PostgreSQL/PostGIS, and a Vite plus TypeScript frontend.

The API exposes endpoints such as /api/status for current conditions and /api/benchmarks/* for historical context. Scheduled pollers fetch fresh upstream data on fixed intervals, validate it, and store it in Postgres. Heavy benchmark calculations run offline rather than on every request, which keeps the live endpoints fast and cacheable.
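
As a rough sketch of that split (the wiring is illustrative: buildStatus() appears in the deep dive below, and db.derived_benchmarks.getByStation is an assumed query helper):

import express from "express"
import { buildStatus } from "./routes/api" // assumed export
import * as db from "./db"

const app = express()

// Live status: reads only the latest polled rows, so it stays fast
app.get("/api/status", async (_req, res) => {
  res.json(await buildStatus())
})

// Benchmarks: served from a precomputed table, never calculated per request
app.get("/api/benchmarks/:stationId", async (req, res) => {
  const stationId = Number(req.params.stationId)
  res.json(await db.derived_benchmarks.getByStation(stationId))
})

app.listen(3000)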

The frontend is a single-page application built for quick rendering during high-interest periods. Leaflet handles the spatial display, charts show recent river behaviour, and the layout is designed to remain usable on mobile when people are likely checking conditions away from a desk.
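
A minimal sketch of the map setup, assuming Leaflet's standard API; the centre coordinates are approximate for Midleton and the marker helper is illustrative:

import * as L from "leaflet"

// Centre on Midleton at a zoom that keeps both gauges and the estuary in view
const map = L.map("map").setView([51.915, -8.175], 13)

L.tileLayer("https://tile.openstreetmap.org/{z}/{x}/{y}.png", {
  attribution: "© OpenStreetMap contributors",
}).addTo(map)

// Each station from /api/status becomes a marker with its latest reading
function addStationMarker(s: { name: string; lat: number; lon: number; levelM: number }) {
  L.marker([s.lat, s.lon]).addTo(map).bindPopup(`${s.name}: ${s.levelM.toFixed(2)} m`)
}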

GIS and Spatial Integration

Flood monitoring is inherently spatial. The dashboard has to decide which gauges matter for Midleton, how rainfall should be sampled geographically, and which coastal tide stations provide the most useful downstream context.

PostGIS handles those location-aware questions. It supports geographic queries such as pulling forecasts for Midleton's coordinates, working with station geometry, and setting up future watershed-aware comparisons between upstream and downstream conditions. Spatial relevance becomes part of the monitoring logic rather than a layer added at the end.
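
As an illustration of the kind of query involved, a sketch using node-postgres; it assumes the stations table carries a PostGIS geom column in WGS 84:

import { Pool } from "pg"

const pool = new Pool()

// Gauges within 20 km of a point, nearest first; geography units are metres
async function nearestStations(lat: number, lon: number) {
  const { rows } = await pool.query(
    `SELECT id, name,
            ST_Distance(geom::geography, ST_SetSRID(ST_MakePoint($1, $2), 4326)::geography) AS distance_m
       FROM stations
      WHERE ST_DWithin(geom::geography, ST_SetSRID(ST_MakePoint($1, $2), 4326)::geography, 20000)
      ORDER BY distance_m`,
    [lon, lat], // ST_MakePoint takes longitude first
  )
  return rows
}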

For another place-first project with very different data characteristics, Mapping Ardmore in the 1926 Census shows how the same design instinct works when the inputs are archival and static rather than live and operational.

Technical Challenges and Solutions

The main implementation challenges were operational rather than visual:

  • handling partial data outages without leaving the dashboard empty
  • normalising JSON, GeoJSON, and XML feeds into one internal model
  • respecting upstream polling limits while keeping updates timely
  • turning raw river levels into benchmarks that users can interpret

Across all four challenges, the same priorities kept recurring: validate aggressively, precompute what can be precomputed, and expose freshness and provenance in the UI.

Implementation Details

The sections below are optional technical depth. They document the operational choices behind the dashboard, but they are not required to understand what the product does.

Challenge 1: Handling Partial Data Outages

Sometimes a river gauge goes offline for maintenance. Solution: freshness thresholds. If the primary station (Ballyedmond) has not reported in 60 minutes, the system falls back to the secondary station (Townparks) with a visible stale-data explanation in the UI.

Technical Deep Dive: Failover Logic and Data Staleness

The buildStatus() function in routes/api.ts implements the fallback logic:

// Simplified sketch of the failover strategy in buildStatus()
interface StationSnapshot {
  level: number
  timestamp: Date
  lastKnownLevel: number
  lastUpdate: Date
  isFresh(maxAgeMs: number): boolean
}

const FRESHNESS_THRESHOLD = 60 * 60 * 1000 // 60 minutes in ms

function getPrimaryRiverLevel(stations: {
  ballyedmond: StationSnapshot
  townparks: StationSnapshot
}) {
  const primary = stations.ballyedmond
  const secondary = stations.townparks

  // Happy path: the primary gauge has reported within the last hour
  if (primary.isFresh(FRESHNESS_THRESHOLD)) {
    return {
      value: primary.level,
      station: "ballyedmond",
      stale: false,
      asOf: primary.timestamp,
    }
  }

  // Fall back to the secondary gauge, and say so in the payload
  if (secondary.isFresh(FRESHNESS_THRESHOLD)) {
    return {
      value: secondary.level,
      station: "townparks",
      stale: false,
      asOf: secondary.timestamp,
      note: "Primary station offline; using secondary",
    }
  }

  // Both stale: show the last known value with a prominent warning
  return {
    value: primary.lastKnownLevel,
    station: "ballyedmond",
    stale: true,
    asOf: primary.lastUpdate,
    staleSinceMinutes: (Date.now() - primary.lastUpdate.getTime()) / 60000,
  }
}

Why this matters: the API always returns a headline number for the main status card, but the UI can still show asOf, stale, and station metadata so users understand the data quality behind it.

Database schema:

CREATE TABLE river_levels (
  id SERIAL PRIMARY KEY,
  station_id INTEGER NOT NULL,
  level_m NUMERIC(8,3),
  timestamp TIMESTAMP NOT NULL,
  recorded_at TIMESTAMP DEFAULT NOW(),
  FOREIGN KEY (station_id) REFERENCES stations(id)
);

CREATE INDEX idx_river_levels_station_recent
ON river_levels(station_id, timestamp DESC);

This composite index lets the latest-reading check for each station come straight off the index instead of scanning the table, so freshness lookups stay cheap as history grows.
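
For example, the freshness lookup behind the failover logic reduces to one indexed query; a sketch with node-postgres, pool configuration assumed:

import { Pool } from "pg"

const pool = new Pool()

// Latest reading for one station; the composite index turns this into an index scan
async function latestReading(stationId: number) {
  const { rows } = await pool.query(
    `SELECT level_m, timestamp
       FROM river_levels
      WHERE station_id = $1
      ORDER BY timestamp DESC
      LIMIT 1`,
    [stationId],
  )
  return rows[0] // undefined if the station has never reported
}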

Challenge 2: Different Data Formats

Met Éireann returns JSON, OPW returns GeoJSON, and the Marine Institute returns XML. Solution: adapter modules that parse, validate, and transform each source into consistent internal types.

Technical Deep Dive: Adapter Pattern and Zod Validation

Each external API adapter has two jobs:

  1. fetch and parse the remote response
  2. validate and transform it into an internal type using Zod

Example: the OPW adapter (adapters/opw.ts):

import { z } from "zod"
import { fetchJson } from "../utils/fetch"

const OPWFeatureSchema = z.object({
  properties: z.object({
    STATION_ID: z.string(),
    STATION_NAME: z.string(),
    VALUE: z.number(),
    DATE_TIME: z.string().datetime(),
    UNITS: z.literal("m"),
  }),
  geometry: z.object({
    type: z.literal("Point"),
    coordinates: z.tuple([z.number(), z.number()]),
  }),
})

const OPWResponseSchema = z.object({
  type: z.literal("FeatureCollection"),
  features: z.array(OPWFeatureSchema),
})

export async function fetchRiverLevels() {
  const response = await fetchJson("http://waterlevel.ie/geojson/latest/")
  const validated = OPWResponseSchema.parse(response)

  return validated.features.map(feature => ({
    stationId: feature.properties.STATION_ID,
    levelMeters: feature.properties.VALUE,
    timestamp: new Date(feature.properties.DATE_TIME),
    source: "OPW",
  }))
}

The Met Éireann adapter has a different schema and transform path, but the same pattern:

const MetEireannWarningSchema = z.object({
  type: z.enum(["Wind", "Rain", "Temperature"]),
  severity: z.enum(["Yellow", "Orange", "Red"]),
  issued: z.string().datetime(),
  onset: z.string().datetime(),
  expiry: z.string().datetime(),
  description: z.string(),
})

export async function fetchWeatherWarnings(countyCode: string) {
  const response = await fetchJson(`https://www.met.ie/Open_Data/json/warning_${countyCode}.json`)

  const warnings = z.array(MetEireannWarningSchema).parse(response)

  return warnings.map(w => ({
    type: w.type,
    severity: w.severity,
    issued: new Date(w.issued),
    description: w.description,
    source: "MetEireann",
  }))
}

Benefits:

  • type-safe validation when upstream APIs change
  • one predictable transform point for each source
  • easier testing with mocked adapter responses (sketched after this list)
  • straightforward extension when a new feed needs to be added
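
A sketch of that testing benefit, assuming Vitest and a test file sitting next to the adapter so the "../utils/fetch" specifier resolves to the same module; the fixture values are invented:

import { describe, expect, it, vi } from "vitest"
import { fetchRiverLevels } from "./opw"

// vi.mock is hoisted, so the adapter sees this canned response instead of the network
vi.mock("../utils/fetch", () => ({
  fetchJson: vi.fn().mockResolvedValue({
    type: "FeatureCollection",
    features: [
      {
        properties: {
          STATION_ID: "19069",
          STATION_NAME: "Ballyedmond",
          VALUE: 1.234,
          DATE_TIME: "2026-03-29T14:30:00Z",
          UNITS: "m",
        },
        geometry: { type: "Point", coordinates: [-8.18, 51.92] },
      },
    ],
  }),
}))

describe("OPW adapter", () => {
  it("transforms a feature into the internal reading shape", async () => {
    const readings = await fetchRiverLevels()
    expect(readings).toHaveLength(1)
    expect(readings[0].stationId).toBe("19069")
    expect(readings[0].levelMeters).toBe(1.234)
  })
})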

Challenge 3: Respecting Upstream Rate Limits

OPW explicitly requests no more than one request per 15 minutes. Solution: hard polling schedules via node-cron, with staggered jobs and cached responses.

Technical Deep Dive: Polling Orchestration and Caching

The polling system in pollers/pollers.ts uses fixed schedules:

import cron from "node-cron"
import * as db from "../db"
import * as adapters from "../adapters"
import { logger } from "../utils/logger" // assumed logger module

export function startPollers() {
  // River levels: OPW asks for at most one request every 15 minutes
  cron.schedule("*/15 * * * *", async () => {
    console.log("[POLLER] Fetching river levels...")
    try {
      const levels = await adapters.opw.fetchRiverLevels()
      await db.river_levels.insertBatch(levels)
      await db.river_levels.pruneOlderThan(30) // keep a rolling window of raw readings
    } catch (err) {
      logger.error("River level poller failed", { error: err })
    }
  })

  // County-level warnings change less often; refresh every 30 minutes
  cron.schedule("*/30 * * * *", async () => {
    try {
      const warnings = await adapters.metEireann.fetchWeatherWarnings("EI04")
      await db.weather_warnings.upsertBatch(warnings)
    } catch (err) {
      logger.error("Warning poller failed", { error: err })
    }
  })

  // Precipitation forecast for Midleton's coordinates
  cron.schedule("*/15 * * * *", async () => {
    try {
      const forecast = await adapters.precipitation.fetch(52.0, -8.5)
      await db.precipitation_forecasts.upsert(forecast)
    } catch (err) {
      logger.error("Forecast poller failed", { error: err })
    }
  })

  // Tide predictions are stable; refresh a month ahead once a day at 03:00
  cron.schedule("0 3 * * *", async () => {
    try {
      const tides = await adapters.tide.fetchMonthAhead()
      await db.tide_observations.upsertBatch(tides)
    } catch (err) {
      logger.error("Tide poller failed", { error: err })
    }
  })
}

Caching happens at two levels:

  1. the latest successful fetch is always stored in Postgres
  2. the API serves cacheable /api/status responses within a 15-minute window

This keeps the system deterministic, upstream-friendly, and fault-tolerant. The trade-off is that the live API may be up to 15 minutes stale, which is acceptable for this monitoring use case.
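
The HTTP half of that second level can be as small as one header; a sketch, assuming Express middleware, with max-age matched to the polling window:

import type { NextFunction, Request, Response } from "express"

// Lets a CDN or the browser absorb repeat hits during high-interest periods
export function statusCacheHeaders(_req: Request, res: Response, next: NextFunction) {
  res.set("Cache-Control", "public, max-age=900") // 900 s = the 15-minute window
  next()
}

// Usage: app.get("/api/status", statusCacheHeaders, statusHandler)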

Challenge 4: Computing Meaningful Benchmarks

Raw numbers such as "4.2 metres" are not useful on their own. Solution: historical percentiles and anomalies that show whether a current level is ordinary, elevated, or unusual for that date.

Technical Deep Dive: Percentile Computation and Historical Aggregation

Benchmarks are precomputed offline via a scheduled job (jobs/computeBenchmarks.ts):

import * as db from "../db"
import { groupByMonthDay } from "../utils/time" // assumed helper: Reading[] -> Record<"MM-DD", Reading[]>

interface Reading {
  level_m: number
  timestamp: Date
}

type DayStats = {
  p25: number
  p50: number
  p75: number
  p90: number
  ath: number
  min: number
  sampleCount: number
  asOf: Date
}

export async function computeRiverBenchmarks(stationId: number) {
  const allReadings: Reading[] = await db.river_levels.getAllByStation(stationId)

  if (allReadings.length < 100) {
    return { available: false, reason: "Insufficient historical data" }
  }

  // Group by calendar day ("MM-DD") so each date is compared with its own season
  const byMonthDay = groupByMonthDay(allReadings)
  const benchmarks: Record<string, DayStats> = {}

  for (const [monthDay, readings] of Object.entries(byMonthDay)) {
    const sorted = readings.map(r => r.level_m).sort((a, b) => a - b)

    benchmarks[monthDay] = {
      p25: percentile(sorted, 25),
      p50: percentile(sorted, 50),
      p75: percentile(sorted, 75),
      p90: percentile(sorted, 90),
      ath: sorted[sorted.length - 1], // array is sorted, so the last entry is the max
      min: sorted[0],
      sampleCount: sorted.length,
      asOf: new Date(),
    }
  }

  await db.derived_benchmarks.upsert(stationId, benchmarks)
}

// Linear-interpolation percentile over a pre-sorted ascending array
function percentile(sorted: number[], p: number): number {
  const index = (p / 100) * (sorted.length - 1)
  const lower = sorted[Math.floor(index)]
  const upper = sorted[Math.ceil(index)]
  return lower + (upper - lower) * (index % 1)
}

Why month-day context matters:

  • March baselines are different from August baselines
  • a value above the P90 for that calendar day is genuinely unusual
  • the same absolute river height can mean different things in different seasons (see the sketch below)
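
A sketch of how a current reading might be classified against that day's benchmark; the field names follow the example output below, but the thresholds and wording are illustrative:

type DayBenchmark = { p25: number; p50: number; p75: number; p90: number }

// Turn a raw level into the kind of label shown on the status card
function classifyLevel(levelM: number, bench: DayBenchmark): string {
  if (levelM > bench.p90) return "above the 90th percentile for this date"
  if (levelM > bench.p75) return "elevated for this date"
  if (levelM < bench.p25) return "low for this date"
  return "ordinary for this date"
}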

Example API output:

{
  "station": "Ballyedmond",
  "current": {
    "level": 4.2,
    "timestamp": "2026-03-29T14:30:00Z"
  },
  "benchmark": {
    "forDate": "2026-03-29",
    "p25": 1.8,
    "p50": 2.1,
    "p75": 2.8,
    "p90": 3.5,
    "ath": 4.67,
    "sampleCount": 15,
    "currentVsP90": "above",
    "description": "Current level is above the 90th percentile for March 29"
  }
}

Possible future PostGIS work includes watershed-aware alerts by comparing nearby upstream stations spatially rather than evaluating each gauge in isolation.

Related Work and Next Steps

The dashboard is live at midleton-flood-dash.fly.dev. Immediate follow-on work includes tide surge visualisation, further review of the benchmark thresholds, and better automated historical backfill on new deployments.

This project also clarified something broader about GIS work: a useful public-facing system often depends less on flashy mapping than on careful integration, validation, and explanation across multiple spatially uneven sources.

If you want to see the same place-first approach applied to static historical records instead of live operational feeds, Mapping Ardmore in the 1926 Census is the clearest companion project. It solves a similar local-exploration problem with completely different data characteristics.


By Dónal O'Tiarnaigh, who works on personal GIS, spatial analysis, and data visualisation projects from the south of Ireland.