Apply a Constraint (in traditional database terminology) to an HTable. Any number of Constraints can be added to the table, in any order.

A Constraint must be added to a table before the table is loaded, via Constraints#add(org.apache.hadoop.hbase.HTableDescriptor, Class[]) or Constraints#add(org.apache.hadoop.hbase.HTableDescriptor, org.apache.hadoop.hbase.util.Pair...) (if you want to add a configuration along with the Constraint). Constraints will be run in the order that they are added. Further, a Constraint will be configured before it is run (on load).
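As a minimal sketch of the registration step, assuming an HBase 0.94-era API: the two Constraint classes named here (IntegerConstraint, NonEmptyConstraint) are hypothetical placeholders, not classes shipped with HBase.

```java
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.constraint.Constraints;

// Sketch: register Constraints on a table descriptor BEFORE the
// table is created/loaded. The Constraints run in the order given.
void createConstrainedTable(HBaseAdmin admin) throws Exception {
  HTableDescriptor desc = new HTableDescriptor("demo_table");
  desc.addFamily(new HColumnDescriptor("f1"));
  // IntegerConstraint and NonEmptyConstraint are hypothetical
  // implementations of Constraint, added here for illustration only.
  Constraints.add(desc, IntegerConstraint.class, NonEmptyConstraint.class);
  admin.createTable(desc);
}
```
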
See Constraints#enableConstraint(org.apache.hadoop.hbase.HTableDescriptor, Class) and Constraints#disableConstraint(org.apache.hadoop.hbase.HTableDescriptor, Class) for enabling and disabling a given Constraint after it has been added.
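A brief sketch of toggling a previously added Constraint, under the same assumptions as above (the IntegerConstraint class and `desc` descriptor are hypothetical):

```java
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.constraint.Constraints;

// Temporarily disable a Constraint that was added earlier, then
// re-enable it; the Constraint stays registered on the descriptor.
void toggleConstraint(HTableDescriptor desc) throws Exception {
  Constraints.disableConstraint(desc, IntegerConstraint.class);
  // ... perform bulk writes without the check ...
  Constraints.enableConstraint(desc, IntegerConstraint.class);
}
```
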
If a Put is invalid, the Constraint should throw an org.apache.hadoop.hbase.constraint.ConstraintException, indicating that the Put has failed. When this exception is thrown, no further retries of the Put are attempted, nor are any other Constraints attempted (the Put is clearly not valid). Therefore, the order in which Constraints are specified has performance implications.
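A sketch of a Constraint implementation that rejects an invalid Put this way, assuming the BaseConstraint helper class; the class name and the "secret" family rule are invented for illustration:

```java
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.constraint.BaseConstraint;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical Constraint: reject any Put touching family "secret".
// Note the nullary constructor requirement is satisfied implicitly.
public class NoSecretFamilyConstraint extends BaseConstraint {
  @Override
  public void check(Put p) throws ConstraintException {
    if (p.getFamilyMap().containsKey(Bytes.toBytes("secret"))) {
      // Throwing ConstraintException fails the Put; no retries and no
      // further Constraints will run for it.
      throw new ConstraintException("Writes to family 'secret' are not allowed");
    }
  }
}
```
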
If a Constraint fails to reject the Put via an org.apache.hadoop.hbase.constraint.ConstraintException, but instead throws a RuntimeException, the entire constraint processing mechanism (ConstraintProcessor) will be unloaded from the table. This ensures that the region server remains functional, but no further Puts will be checked via Constraints.
Further, Constraints should probably not be used to enforce cross-table references, as doing so will cause tremendous write slowdowns, though it is possible.

NOTE: Implementing classes must have a nullary (no-args) constructor.