Alerts
Usage-based pricing invites surprises. A customer might accidentally leave a job running and run up a bill they did not mean to. A free-tier account might cross the boundary where they start owing money, and you want to nudge them toward a paid plan before they hit a harder cap. Alerts are the mechanism Unimeter offers for noticing these moments as they happen.
How an alert works
When you register a metric, you can attach one or more thresholds to it. A threshold is a named number and a flag that says whether it fires repeatedly or only once. If the running total for any customer crosses that number, Unimeter flips a bit in that customer’s aggregate to say so and writes a record to a durable log on disk.
On every event ingestion, Unimeter checks the updated total against each threshold on the metric. If the new total is at or above a threshold’s value and the corresponding bit is not already set, or if the threshold is set to recur, Unimeter records a crossing. If a later event pushes the total back below the threshold, the bit is cleared so that a future recrossing is recorded again.
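The check-and-set rule above is compact enough to sketch. The types and names below are illustrative, not Unimeter’s real internals; the point is the firing logic: fire when the total reaches the line and the bit is unset (or the threshold recurs), clear the bit when the total falls back below.

```go
package main

import "fmt"

// Threshold is a hypothetical mirror of a threshold's configuration:
// a named value plus a recurring flag.
type Threshold struct {
	Name      string
	Value     float64
	Recurring bool
}

// Aggregate holds a customer's running total and one "already fired"
// bit per threshold, indexed by position in the metric's threshold list.
type Aggregate struct {
	Total float64
	Fired []bool
}

// applyEvent adds delta to the aggregate and returns the names of the
// thresholds whose crossing would be recorded for this event, following
// the rules described above.
func applyEvent(agg *Aggregate, thresholds []Threshold, delta float64) []string {
	agg.Total += delta
	var crossed []string
	for i, t := range thresholds {
		switch {
		case agg.Total >= t.Value && (!agg.Fired[i] || t.Recurring):
			// At or above the line, and either not yet fired or recurring.
			agg.Fired[i] = true
			crossed = append(crossed, t.Name)
		case agg.Total < t.Value:
			// Dropped back below: clear the bit so a recrossing fires again.
			agg.Fired[i] = false
		}
	}
	return crossed
}

func main() {
	ths := []Threshold{{Name: "free_tier_exceeded", Value: 100}}
	agg := &Aggregate{Fired: make([]bool, len(ths))}
	fmt.Println(applyEvent(agg, ths, 90)) // below the line: nothing fires
	fmt.Println(applyEvent(agg, ths, 20)) // crosses 100: fires once
	fmt.Println(applyEvent(agg, ths, 5))  // still above: bit already set, silent
}
```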
Nothing is sent out of Unimeter automatically. There is no webhook today, no email, no push notification. What you get is state inside Unimeter that your application can read and act on. This is a deliberate design choice. By making alerts a queryable state rather than a push event, Unimeter stays simple, and your application controls exactly how to react and when.
A worked example
Imagine a free-tier SaaS where each workspace gets ten thousand API calls per month free, then pays per call above that, and is cut off at one hundred thousand to prevent runaway bills. You define the api_calls metric with count aggregation, a monthly period, and two thresholds. The first is named free_tier_exceeded at value 10000, non-recurring, so it fires the moment a customer first goes over their free allowance. The second is named hard_cap at value 100000, also non-recurring, so it fires the moment a customer hits the cap.
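In configuration terms, the worked example amounts to something like the following. The struct names here (MetricDef, ThresholdDef) are placeholders for illustration, not the real client’s API; see Define metrics for the actual syntax.

```go
package main

import "fmt"

// ThresholdDef and MetricDef are hypothetical stand-ins for the
// definitions the worked example describes.
type ThresholdDef struct {
	Name      string
	Value     int64
	Recurring bool
}

type MetricDef struct {
	Name        string
	Aggregation string
	Period      string
	Thresholds  []ThresholdDef
}

func main() {
	apiCalls := MetricDef{
		Name:        "api_calls",
		Aggregation: "count",
		Period:      "monthly",
		Thresholds: []ThresholdDef{
			// Fires once, the first time a workspace exceeds its free allowance.
			{Name: "free_tier_exceeded", Value: 10000, Recurring: false},
			// Fires once, the moment a workspace hits the hard cap.
			{Name: "hard_cap", Value: 100000, Recurring: false},
		},
	}
	fmt.Printf("%+v\n", apiCalls)
}
```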
When a customer crosses 10000 calls, the free_tier_exceeded bit is set in their aggregate’s alert flags. Your billing system polls Unimeter on a timer, or checks the flags during its next query, and when it notices the bit is set it sends a friendly email offering an upgrade. When the customer crosses 100000 calls, the hard_cap bit is set and your ingress layer can refuse further calls until the next period begins.
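The reacting side can be as simple as a switch over a bitmask. The bit layout below (bit 0 for free_tier_exceeded, bit 1 for hard_cap) is an assumption for illustration; only the pattern matters: Unimeter exposes state, and your application decides the action.

```go
package main

import "fmt"

// Hypothetical flag layout: bit i corresponds to the i-th threshold
// attached to the metric.
const (
	freeTierExceeded = 1 << 0
	hardCap          = 1 << 1
)

// decide maps the flag word your query returns to the action your
// application takes; Unimeter itself never sends anything outbound.
func decide(flags uint64) string {
	switch {
	case flags&hardCap != 0:
		return "reject calls until next period"
	case flags&freeTierExceeded != 0:
		return "send upgrade email"
	default:
		return "no action"
	}
}

func main() {
	fmt.Println(decide(0))                          // no action
	fmt.Println(decide(freeTierExceeded))           // send upgrade email
	fmt.Println(decide(freeTierExceeded | hardCap)) // reject calls until next period
}
```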
Recurring versus non-recurring
Most thresholds are non-recurring. You want to notice the first time a customer enters a state and react to that event, not get pinged every time another event arrives while they are still in the state. Non-recurring thresholds fire once per period, and reset only if the customer’s total drops below the line and then climbs back up.
Recurring thresholds are for the case where you want ongoing confirmation. If a customer is continuously over their quota, a recurring threshold emits a new log entry with every event that keeps them above the line. This is less common but sometimes useful for accounting purposes, when you want a precise record of each event that occurred while the customer was in breach.
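The difference in log volume is easy to see with a small simulation of the rules described above (a sketch, not Unimeter’s code): over the same event stream, a non-recurring threshold yields one entry while a recurring one yields an entry per event above the line.

```go
package main

import "fmt"

// countLogEntries simulates how many alert-log entries one threshold
// produces over a stream of event deltas, under the recurring and
// non-recurring rules described above.
func countLogEntries(threshold float64, recurring bool, deltas []float64) int {
	total, fired, entries := 0.0, false, 0
	for _, d := range deltas {
		total += d
		if total >= threshold {
			if !fired || recurring {
				entries++
			}
			fired = true
		} else {
			fired = false
		}
	}
	return entries
}

func main() {
	// The second event crosses a threshold of 10; the rest stay above it.
	deltas := []float64{8, 4, 1, 1, 1}
	fmt.Println(countLogEntries(10, false, deltas)) // 1: the crossing only
	fmt.Println(countLogEntries(10, true, deltas))  // 4: every event above the line
}
```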
The alert log
Unimeter writes every threshold crossing to a durable append-only file called the alert log. Each entry records which account crossed which threshold on which metric, the value at the moment of crossing, and the timestamp. Your application can read the log from an offset it remembers, process the entries it has not yet seen, and record where it stopped. This is how you build reliable alerting on top of Unimeter even when your consumer restarts.
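The read-from-offset loop looks like this. The log here is an in-memory slice and AlertEntry is a placeholder mirroring the fields the text lists; in a real consumer the entries would come from Unimeter’s query API and the offset from your own storage.

```go
package main

import "fmt"

// AlertEntry mirrors the fields each alert-log entry records.
type AlertEntry struct {
	Account   string
	Metric    string
	Threshold string
	Value     float64
	Timestamp int64
}

// consumeFrom handles every entry at or past the remembered offset and
// returns the new offset to persist before the next run.
func consumeFrom(log []AlertEntry, offset int, handle func(AlertEntry)) int {
	for _, e := range log[offset:] {
		handle(e)
	}
	return len(log)
}

func main() {
	log := []AlertEntry{
		{Account: "ws-1", Metric: "api_calls", Threshold: "free_tier_exceeded", Value: 10001, Timestamp: 1700000000},
		{Account: "ws-2", Metric: "api_calls", Threshold: "hard_cap", Value: 100000, Timestamp: 1700000100},
	}
	offset := 0
	offset = consumeFrom(log, offset, func(e AlertEntry) {
		fmt.Println(e.Account, "crossed", e.Threshold)
	})
	// After a restart, resuming from the persisted offset re-reads nothing.
	offset = consumeFrom(log, offset, func(e AlertEntry) {
		fmt.Println("duplicate!", e.Account)
	})
	fmt.Println("offset:", offset)
}
```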
The log is local to the node that ingested the event. In a multi-node cluster you read from each node and merge, or you consolidate in your own downstream system. A future release will expose the log over the protocol directly, but today it is accessible through the query API on the node that wrote it.
Thresholds and filters
Thresholds are set on the overall total for a metric, not on per-dimension slices. If you want to alert on a specific dimension value, like warning when AWS compute alone crosses a number, the current workaround is to create a separate metric for that slice. Per-dimension thresholds are a natural extension and are planned for a future release.
Live delivery to your application
Alerts are delivered to your code through a Go channel. When you subscribe, the Go client opens a live push stream with every node in the cluster, catches up any alerts that happened while your process was offline, and then forwards every new crossing to the channel in real time. This is covered in detail in Alert subscriptions.
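The consuming side is plain channel code. The Crossing struct and the hand-filled channel below are stand-ins (the real client owns the channel and its element type; see Alert subscriptions); only the range-until-closed pattern is the point.

```go
package main

import "fmt"

// Crossing is a placeholder for whatever struct the subscription
// stream actually delivers.
type Crossing struct {
	Account   string
	Threshold string
}

// handleAlerts ranges over the channel until it is closed (e.g. on
// shutdown) and returns how many crossings were processed.
func handleAlerts(alerts <-chan Crossing) int {
	n := 0
	for c := range alerts {
		fmt.Printf("%s crossed %s\n", c.Account, c.Threshold)
		n++
	}
	return n
}

func main() {
	// In real code the client fills this channel from its push streams;
	// here we feed it by hand to show only the consumer side.
	alerts := make(chan Crossing, 2)
	alerts <- Crossing{Account: "ws-1", Threshold: "free_tier_exceeded"}
	alerts <- Crossing{Account: "ws-2", Threshold: "hard_cap"}
	close(alerts)
	handleAlerts(alerts)
}
```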
For batch use cases where a small delay is acceptable, you can also poll client.ListAlerts on a timer and consume the log at whatever cadence suits you. The two approaches read from the same underlying durable log, so the data is identical; the only difference is how quickly you learn about new crossings.
What comes next
See Define metrics for the syntax that attaches thresholds to a metric in your Go code. Once alerts are firing, learn how to consume them in real time with Alert subscriptions.