Monthly Archives: September 2007

On Ruby on Rails with PostgreSQL, and Acts as Paranoid


Back a few years ago I was researching the differences between PostgreSQL and MySQL, and chose PostgreSQL because at the time it supported foreign key constraints and many other fantastic SQL extensions that make a developer’s life a lot easier. Today I am sure MySQL is just as functional as PostgreSQL, although MySQL does appear to be the more popular choice for Rails databases. I still prefer PostgreSQL; it just feels more natural to me coming from an Oracle background, so I am probably biased (who isn’t?). Anyway, in the last year and a half or so, I’ve been writing Rails apps that use PostgreSQL-backed databases and found a few random tricks I’d love to share here.

Opinions differ here, but just like accountants like the ancient double-entry accounting system, I personally prefer a double validation system – one where Rails validates my objects before/after updates, but the database double-checks this using proper constraints. Rails’ validation system is very robust and extensive, and that is its power. However, nobody is protected from human error: it’s pretty easy to forget certain validations, some validations may be performed conditionally, and in general Rails validations operate at a higher level. I’ve certainly missed a validation or two in the past myself, or had a bug in a condition. So as a rule, I specifically like to use the following in all of my migrations:

  • Foreign key constraints, with “ON DELETE CASCADE” to allow tests to drop/remove fixture data
  • Check constraints to verify values satisfy certain conditions. Use these to validate finite-value columns, such as state/status, gender, an inheritance discriminator column, or a polymorphic table’s type discriminator. Use them to validate pattern-based columns too: e.g., if you decide to store SSN as a string of the form XXX-XX-XXXX, a check constraint can ensure it really is.
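To see what such a pattern check buys you, here is the SSN rule mirrored in plain Ruby – a sketch only; the constraint SQL in the comment and the `valid_ssn?` helper name are my own illustration, not from a real schema:

```ruby
# The database-side rule might look like:
#   ALTER TABLE users ADD CONSTRAINT users_check_ssn
#   CHECK (ssn ~ '^\d{3}-\d{2}-\d{4}$')
# The same pattern, expressed in Ruby:
SSN_PATTERN = /\A\d{3}-\d{2}-\d{4}\z/

def valid_ssn?(ssn)
  !!(ssn =~ SSN_PATTERN)
end

valid_ssn?("123-45-6789")  # => true
valid_ssn?("123456789")    # => false
```

Rails can run the same check via a format validation, while the constraint guarantees it for any writer, ActiveRecord or not.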

Foreign Key Constraints

Let’s start with foreign key constraints. They are extremely useful in ensuring your relationships work and do not point to non-existent rows. This is even more important if you bypass ActiveRecord and use direct SQL to insert data in certain cases (as may happen with some batched or high-volume data loads). I use foreign keys on ALL of my tables, and haven’t had much trouble with that. The only issue to watch out for is the order in which your fixtures are loaded. Basically, you want to load your independent tables first, such as countries/states – those that do not depend on anything – and then load the fixtures that depend on them, in order of dependency.

Please see the following post for an opposite opinion, and note that I don’t agree with the author’s suggestion not to use foreign keys – read the first comment. I too was saved on multiple occasions by having them, especially in development (since that’s where you’ll find most bugs in the first place).

Here’s how to add a foreign key constraint in your migration:

create_table :profiles do |t|
   t.column :user_id,    :integer, :null => false
end

execute "ALTER TABLE profiles 
          ADD CONSTRAINT profiles_fk_user_id
          FOREIGN KEY (user_id) 
          REFERENCES users (id) 
          ON DELETE CASCADE"

Notice how all constraints are named. This is very important, because you want to be able to change/update constraints in the future (especially the check constraints – see below). Not giving a constraint an explicit name forces the database to come up with an auto-generated one, which is not very useful: any future migration that needs to change that constraint would be at a loss as to how to reference it reliably. Using a proper naming convention is also a good idea: table_fk_field is a common naming practice for foreign key constraints.

Check Constraints

Check constraints verify that values in a column satisfy provided conditions. They are extremely useful in ensuring you don’t get “^%$$$” as the gender value for user Bob, or “Frak!” as the status of your order. A database can be manipulated in many ways, and in my experience, unless you protect your columns, some weird stuff always ends up in there, no matter how great the validation framework sitting in front of it. In this hypothetical example, we need to ensure that the gender column on our frogs table only allows “M”, “F” or “T” as possible gender values (I live in San Francisco, yo!). So in the migration file, specify a check constraint similar to how we did it for foreign key constraints:

create_table :frogs do |t|
   t.column :name, :string
   t.column :gender, :string,   :limit => 1, :null => false
end

execute "ALTER TABLE frogs 
          ADD CONSTRAINT frogs_check_gender 
          CHECK (gender IN ('M', 'F', 'T'))"
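Because the constraint is named, a later migration can swap it out by that name, for example to allow a new gender value. A sketch – the migration class name and the extra 'U' value are my own illustration:

```ruby
class RelaxFrogsGenderCheck < ActiveRecord::Migration
  def self.up
    # possible only because the original constraint was named explicitly
    execute "ALTER TABLE frogs DROP CONSTRAINT frogs_check_gender"
    execute "ALTER TABLE frogs
             ADD CONSTRAINT frogs_check_gender
             CHECK (gender IN ('M', 'F', 'T', 'U'))"
  end

  def self.down
    execute "ALTER TABLE frogs DROP CONSTRAINT frogs_check_gender"
    execute "ALTER TABLE frogs
             ADD CONSTRAINT frogs_check_gender
             CHECK (gender IN ('M', 'F', 'T'))"
  end
end
```

With an auto-generated name you’d first have to dig the name out of the catalog on every environment before you could drop it.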

Please see the following link for more information on PostgreSQL’s powerful constraint syntax:
PostgreSQL Constraints

Partial Indexes, and Acts as Paranoid

If you have installed the acts_as_paranoid plugin, you’ll notice that it adds a deleted_at column, which is going to be NULL for all rows that are active in your database. Calling destroy will not physically delete the record, but will merely set deleted_at to the current timestamp. Declaring a model “acts_as_paranoid” also adds the following to all queries:

   and (deleted_at IS NULL or deleted_at > ?)

First of all, the “deleted_at > ?” clause is meant to allow models to expire in the future, but I haven’t met a single person who uses this feature. So the first thing I do when I install this plugin is modify its source to remove this comparison. I want my find statements to generate the following:

   and (deleted_at IS NULL)

Ok, so things are great now, but suddenly I realize that my favorite index on the users table and email field does not really work as well! But of course… I am now querying on both “email = ? and deleted_at is null”. So how do you add deleted_at to the index? Here’s how:

execute "CREATE INDEX users_idx_email 
            ON users (email, deleted_at) 
            WHERE deleted_at IS NULL"

This uses a cool feature of PostgreSQL called a partial index to create an index on a subset of rows.
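Partial indexes also solve a related acts_as_paranoid headache: a plain UNIQUE index on email would block re-registering an address that only exists on soft-deleted rows. A partial unique index enforces uniqueness among live rows only – a sketch, with an index name of my own choosing:

```ruby
execute "CREATE UNIQUE INDEX users_uniq_email_active 
            ON users (email) 
            WHERE deleted_at IS NULL"
```

Deleted rows fall outside the index, so the same email can exist any number of times among soft-deleted records while staying unique among active ones.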

Comments and thoughts/suggestions are always welcome!


Posted by on September 25, 2007 in Ruby on Rails, Technology






Ruby on Rails: Reducing clutter in actions by placing common code in filters


This is a tiny but useful tip that saved many lines of repeated code in my controllers, so why not share it 🙂

Do Not Repeat Thyself?

If you’ve looked at the controller code that’s generated by scaffolding, you’ll find something like this:

  def show
    @user = User.find(params[:id])
  end

  def new
    @user = User.new
  end

  def destroy
    @user = User.find(params[:id])
    @user.destroy
  end

Sure, in this case all we are doing repeatedly is instantiating the user from a potentially available parameter value. What if instead we placed these common fetches in a controller filter, which would simply set an instance variable for us? Hell, we could even handle exceptions (such as an invalid ID) in only one place this way! What’s not to like?

When dealing with a more complicated route, this instantiation may become quite a bit more elaborate, and the case for a filter is even more justified.

Consider the case of building a collaboration system where you have projects and individual contributions under each project, as well as a producer of each project. We might want to support all project operations under a URL that looks kind of like this:

/user/:username/project/:project_name/contribution/:action/:id
with a corresponding route in the config/routes.rb file:

# project management route
map.connect '/user/:username/project/:project_name/contribution/:action/:id',
    :controller     => "contribution"

Based on the route defined, Rails will create params[:username], params[:project_name] and params[:id] (the contribution id), in addition to the standard action/controller pair.
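As a toy illustration – this is not Rails’ actual routing code, and the username/project values are made up – here’s how the URL’s segments line up with the pattern’s symbols:

```ruby
# Pattern segments starting with ':' capture the matching URL segment.
route = '/user/:username/project/:project_name/contribution/:action/:id'
url   = '/user/bob/project/tadpole/contribution/edit/42'

params = {}
route.split('/').zip(url.split('/')) do |pattern, value|
  params[pattern.sub(':', '').to_sym] = value if pattern.start_with?(':')
end

params  # => {:username=>"bob", :project_name=>"tadpole", :action=>"edit", :id=>"42"}
```

Real routing also validates the literal segments (“user”, “project”, “contribution”), but the captured-symbols idea is the same.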

Now imagine that the controller we are writing has many actions, such as add, edit, view, list, append, preview, post, comment, etc. All of them could use a handle on the project, its producer and the contribution instances – ideally in @project, @producer_user and @contribution respectively.

If we followed the scaffolding example, we’d simply add appropriate lookups at the beginning of each action. But that’s a lot of repeated code!

So let’s use filters instead, to get our common lookups under control.

class ContributionController < ApplicationController
  before_filter :setup_project

  def home
     # do stuff
  end

  private

  # Looks up the producer, project and contribution for every action.
  # (find_by_username and the contributions association are assumed here.)
  def setup_project
    @producer_user = User.find_by_username(params[:username])
    @project =
      @producer_user.produced_projects.detect do |p|
        p.name.downcase == params[:project_name].downcase
      end
    @contribution = @project.contributions.find(params[:id]) if params[:id]
    return true
  rescue Exception => e
    logger.error "can't show project: #{e.message}"
    flash[:error] = e.message
    redirect_to :controller => "home" and return false
  end
end

This is short and elegant, and now every action in this controller (unless excluded from the setup_project filter) will have access to our instance variables. Groovy!
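To see the mechanics in isolation, here is a framework-free sketch of the before_filter idea – all class and method names below are made up for illustration; this is not Rails’ implementation:

```ruby
# A minimal controller base class that runs registered filters before actions.
class TinyController
  def self.before_filters
    @before_filters ||= []
  end

  def self.before_filter(name)
    before_filters << name
  end

  def run(action)
    self.class.before_filters.each { |f| send(f) }  # filters first
    send(action)                                    # then the action itself
  end
end

class UsersController < TinyController
  before_filter :setup_user

  def setup_user
    @user = "user-42"  # stand-in for User.find(params[:id])
  end

  def show
    "showing #{@user}"
  end
end

UsersController.new.run(:show)  # => "showing user-42"
```

Every action gets @user for free, and the lookup (plus its error handling) lives in exactly one place.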


Of course nothing is free, and in this case we are potentially losing performance. Perhaps some actions don’t need to know about @producer_user, yet we’ll be fetching it all the time – which costs the server, but saves time for us developers. If performance problems occur because of extra unused fetches, the filter can be broken up or optimized. Until then, it sure saves me a headache.

That’s it!

Thoughts, comments?


Posted by on September 7, 2007 in Ruby on Rails, Technology