This is the second post in my series Notes on Database Design, where I introduce a simple database schema to support selling event tickets for my fictional company, TicketDoctor.
Let’s run with a very simple database schema that we’ll use through the rest of this series to implement examples, because what’s a series of database-related posts without a mock schema to work with? Anyway, here’s a very simple audit table.
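A minimal sketch, with illustrative column names, could look like this:

CREATE TABLE audit (
    added_by text        NOT NULL DEFAULT current_user,
    added_at timestamptz NOT NULL DEFAULT now()
);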
This is used to record who adds each record and when. The DEFAULT clauses provide the data we need. By issuing separate database users for each interface, or even for each user accessing your database through your client applications, you can keep a detailed log of who is doing what.
Having this information inescapably captured by the database has helped me more than once with identifying a misbehaving piece of software adding bad data.
Depending on how much you trust your client applications, you could go one step further and have a TRIGGER forcibly set these fields. This removes the possibility of the application lying about who’s adding a record, but you would have to write the trigger and apply it to the required tables. I might write about ways to automate this at some point.
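A sketch of such a trigger, reusing the added_by and added_at columns assumed above and attaching it to the ticket table we will meet shortly:

CREATE FUNCTION force_audit_fields() RETURNS trigger AS $$
BEGIN
    -- Overwrite whatever the client sent with values the database trusts.
    NEW.added_by := current_user;
    NEW.added_at := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER set_audit_fields
    BEFORE INSERT ON ticket
    FOR EACH ROW EXECUTE FUNCTION force_audit_fields();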
The INHERITS clause of CREATE TABLE in PostgreSQL adds the audit table’s columns to the rest of the important tables.
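For example, a ticket table picking up the audit columns could be declared like this; the ticket columns themselves are a sketch, matching the seat_number and show_id fields discussed below:

CREATE TABLE ticket (
    show_id     integer NOT NULL,
    seat_number text    NOT NULL
) INHERITS (audit);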
This goes beyond simply copying column definitions at CREATE TABLE time. When you later add columns to the parent table, the child tables, well, inherit them too. This is a very powerful feature to keep in mind because it helps keep table structures consistent as your application changes.
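For instance, with the tables sketched above, adding a column to audit makes it appear in ticket as well; the extra column here is purely illustrative:

-- Every table that INHERITS (audit) now gets added_from too.
ALTER TABLE audit ADD COLUMN added_from inet DEFAULT inet_client_addr();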
Indexes as integrity constraints
You’ll run into your fair share of database developers who are genuinely surprised by the notion of using database indexes to enforce integrity constraints. Some of them will vehemently defend writing these constraints in user code, sometimes in multiple places, because of reasons. And nobody ever makes mistakes loading data, so everything is always ok.
Actually, loading data is a good example to show integrity constraints in action. With this schema, opening a new concert to the public would involve loading a venue, the show, and the tickets for sale. Tickets are naturally defined as a seat (seat_number) in a specific show (show_id). That definition will do for now; it’s something we’ll want to change later to allow for faster searches.
Now, think about what would happen in the real world if this statement were to be run, probably as part of loading the thousands of tickets for a specific concert:
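Picture something along these lines, with the seat chosen to match the error message shown further down:

-- Part of a bulk ticket load; nothing stops it from being run twice
-- for the same seat.
INSERT INTO ticket (show_id, seat_number)
VALUES (1, 'Section 140, Row 1, Seat 21');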
Selling two tickets for the same seat would likely not be a good user experience, so we would want to prevent duplicate tickets. This can be trivially achieved using a UNIQUE INDEX:
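A sketch over the two columns that identify a ticket, with the index name matching the one in the error message below:

CREATE UNIQUE INDEX same_ticket_idx ON ticket (show_id, seat_number);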
Now, attempting to add conflicting seats with the INSERT above results in an exception:
ERROR: duplicate key value violates unique constraint "same_ticket_idx"
DETAIL: Key (show_id, seat_number)=(1, Section 140, Row 1, Seat 21) already exists.
Offending data did not make it to the database!
The beauty of this is that client code does not need to add specific checks to detect conflicting seats. All that’s required is trapping the exception. In fact, by looking at the error itself ("duplicate key value..."), the application can provide a friendlier error to its caller.
In terms of performance, this is faster than looking for a duplicate seat via SELECT, and the client application becomes simpler because it does not need so much validation code. The database can quickly look at the index, determine whether the current record is a duplicate that would violate the UNIQUE constraint imposed by the index, and abort the transaction. This happens without the client having to pull a single row over the network, hence the gain in speed.
The overall system also becomes more robust, as the whole operation is transactional in nature. Groups of seats can be managed within a single BEGIN TRANSACTION; ⋯ COMMIT; block, so either all of them are acted upon or the seats are left as they were.
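As a sketch (the extra seats are invented for the example), a batch load wrapped in a transaction could look like this:

BEGIN;
INSERT INTO ticket (show_id, seat_number) VALUES (1, 'Section 140, Row 1, Seat 22');
INSERT INTO ticket (show_id, seat_number) VALUES (1, 'Section 140, Row 1, Seat 23');
-- If any statement above violated same_ticket_idx, the transaction is
-- aborted and COMMIT ends it with a rollback; no partial batch is left behind.
COMMIT;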
In the long run, this approach provides a simpler environment: all checks execute automatically, close to the data. Errors in application code cannot insert invalid data. If proper integrity constraints are in place, it becomes impossible for the applications to violate the business rules deployed as database constraints. This results in a higher degree of data quality.