An
IncrementalJob that consumes partitioned input data and produces
output data having the same partitions.
Typically this is used in conjunction with
AbstractPartitionCollapsingIncrementalJob when computing aggregates over sliding windows. A partition-preserving job can perform
initial aggregation per-day, which can then be consumed by a partition-collapsing job to
produce the final aggregates over the time window.
Only Avro is supported for the input, intermediate, and output data.
Implementations of this class must provide key, intermediate value, and output value schemas.
The key and intermediate value schemas define the output for the mapper and combiner.
The key and output value schemas define the output for the reducer.
These are defined by overriding
#getKeySchema(),
#getIntermediateValueSchema(),
and
#getOutputValueSchema().
Implementations must also provide a mapper by overriding
#getMapper() and an accumulator
for the reducer by overriding
#getReducerAccumulator(). An optional combiner may be
provided by overriding
#getCombinerAccumulator(). For the combiner to be used
the property use.combiner must also be set to true.
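As a rough sketch, a subclass that counts records per id might look like the following. The class and field names (EventCountJob, id, count) are hypothetical, and the exact signatures shown here (the Avro Schema return types, the Mapper, Accumulator, and KeyValueCollector types, and a constructor taking a name and Properties) are assumptions about the surrounding API rather than verbatim declarations.

  import java.io.IOException;
  import java.util.Properties;

  import org.apache.avro.Schema;
  import org.apache.avro.generic.GenericData;
  import org.apache.avro.generic.GenericRecord;

  // Hypothetical example: counts records per "id" while preserving the daily partitions.
  public class EventCountJob extends AbstractPartitionPreservingIncrementalJob
  {
    // Key and value schemas; the field names are illustrative only.
    private static final Schema KEY_SCHEMA = new Schema.Parser().parse(
      "{\"type\":\"record\",\"name\":\"Key\",\"fields\":[{\"name\":\"id\",\"type\":\"long\"}]}");
    private static final Schema VALUE_SCHEMA = new Schema.Parser().parse(
      "{\"type\":\"record\",\"name\":\"Value\",\"fields\":[{\"name\":\"count\",\"type\":\"int\"}]}");

    public EventCountJob(String name, Properties props) throws IOException
    {
      super(name, props);
    }

    // Key and intermediate value schemas define the mapper and combiner output.
    @Override
    protected Schema getKeySchema() { return KEY_SCHEMA; }

    @Override
    protected Schema getIntermediateValueSchema() { return VALUE_SCHEMA; }

    // Key and output value schemas define the reducer output.
    @Override
    protected Schema getOutputValueSchema() { return VALUE_SCHEMA; }

    @Override
    public Mapper<GenericRecord, GenericRecord, GenericRecord> getMapper()
    {
      return new CountMapper();
    }

    @Override
    public Accumulator<GenericRecord, GenericRecord> getReducerAccumulator()
    {
      return new CountAccumulator();
    }

    // Emits (id, 1) for each input record.
    private static class CountMapper implements Mapper<GenericRecord, GenericRecord, GenericRecord>
    {
      @Override
      public void map(GenericRecord input, KeyValueCollector<GenericRecord, GenericRecord> collector)
        throws IOException, InterruptedException
      {
        GenericRecord key = new GenericData.Record(KEY_SCHEMA);
        key.put("id", input.get("id"));
        GenericRecord value = new GenericData.Record(VALUE_SCHEMA);
        value.put("count", 1);
        collector.collect(key, value);
      }
    }

    // Sums the partial counts for each key within a single day's partition.
    private static class CountAccumulator implements Accumulator<GenericRecord, GenericRecord>
    {
      private int count;

      @Override
      public void accumulate(GenericRecord value)
      {
        count += (Integer) value.get("count");
      }

      @Override
      public GenericRecord getFinal()
      {
        GenericRecord output = new GenericData.Record(VALUE_SCHEMA);
        output.put("count", count);
        return output;
      }

      @Override
      public void cleanup()
      {
        count = 0;
      }
    }
  }

Because this accumulator is a simple sum, the same implementation could also be returned from getCombinerAccumulator(), with use.combiner set to true, to reduce the volume of intermediate data.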
The distinguishing feature of this type of job is that the input partitioning is preserved in the output.
The data from each partition is processed independently of other partitions and then output separately.
For example, input that is partitioned by day can be aggregated by day and then output by day.
This is achieved by attaching a long value to each key, which represents the partition, so that the reducer
receives data grouped by the key and partition together. Multiple outputs are then used so that the output
will have the same partitions as the input.
The input path can be provided either through the property input.path
or by calling
#setInputPaths(List). If multiple input paths are provided then
this implicitly means a join is to be performed. Multiple input paths can be provided via
properties by prefixing each with input.path., such as input.path.first
and input.path.second.
Input data must be partitioned by day according to the naming convention yyyy/MM/dd.
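For example, assuming the job takes its configuration from a Properties object passed to its constructor, as in the hypothetical EventCountJob above, and that the input path list holds Hadoop Path objects, the input could be wired up as follows; all paths are illustrative:

  import java.util.Arrays;
  import java.util.Properties;

  import org.apache.hadoop.fs.Path;

  // Hypothetical wiring for the EventCountJob sketched earlier.
  public static EventCountJob createJob() throws Exception
  {
    Properties props = new Properties();
    props.setProperty("input.path", "/data/event");
    // For an implicit join over multiple inputs, use the prefixed form instead:
    //   props.setProperty("input.path.first", "/data/event");
    //   props.setProperty("input.path.second", "/data/profile");

    EventCountJob job = new EventCountJob("eventCount", props);

    // Alternatively, the input paths can be set programmatically:
    job.setInputPaths(Arrays.asList(new Path("/data/event")));

    return job;
  }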
The output path can be provided either through the property output.path
or by calling
#setOutputPath(Path).
Output data will be written using the same naming convention as the input, namely yyyy/MM/dd, where the date used
to format the output path is the same as the date of the input it was derived from.
For example, if the desired time range to process is 2013/01/01 through 2013/01/14,
then the output will be named 2013/01/01 through 2013/01/14.
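Continuing the hypothetical createJob() sketch, the output location could be configured either way; the path is illustrative:

  props.setProperty("output.path", "/output/event_count");
  // or, after constructing the job:
  job.setOutputPath(new Path("/output/event_count"));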
By default the job will fail if any input data in the desired time window is missing. This can be overridden by setting
fail.on.missing to false.
The job will not process input for which a corresponding output already exists. For example, if the desired date
range is 2013/01/01 through 2013/01/14 and the outputs 2013/01/01 through 2013/01/12 exist, then only
2013/01/13 and 2013/01/14 will be processed and only 2013/01/13 and 2013/01/14 will be produced.
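For instance, to let the hypothetical job proceed even when some days of input are absent, the property can be added to the same Properties object before the job is constructed:

  props.setProperty("fail.on.missing", "false");  // skip missing days instead of failing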
The number of paths in the output to retain can be configured through the property retention.count,
or by calling
#setRetentionCount(Integer). When this property is set only the latest paths in the output
will be kept; the remainder will be removed. By default there is no retention count set so all output paths are kept.
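For example, to keep only the most recent 30 days of output in the hypothetical job above:

  props.setProperty("retention.count", "30");
  // or:
  job.setRetentionCount(30);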
The inputs to process can be controlled by defining a desired date range. By default the job will process all input
data available. To limit the number of days of input to process one can set the property num.days
or call
#setNumDays(Integer). This would define a processing window with the same number of days,
where the end date of the window is the latest available input and the start date is num.days ago.
Only inputs within this window would be processed.
Because the end date is the same as the latest available input, as new input data becomes available the end of the
window will advance forward to include it. The end date can be adjusted backwards relative to the latest input
through the property days.ago, or by calling
#setDaysAgo(Integer). This subtracts that many days
from the latest available input date to determine the end date. The start date or end date can also be fixed
by setting the properties start.date or end.date, or by calling
#setStartDate(Date) or
#setEndDate(Date).
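Continuing the hypothetical createJob() sketch (which declares throws Exception), a few ways to control the processing window; the dates are illustrative:

  // Process only the most recent 14 days of available input:
  job.setNumDays(14);

  // Shift the end of the window back 1 day from the latest available input:
  job.setDaysAgo(1);

  // Or pin the window explicitly using java.util.Date values:
  java.text.SimpleDateFormat fmt = new java.text.SimpleDateFormat("yyyy/MM/dd");
  job.setStartDate(fmt.parse("2013/01/01"));
  job.setEndDate(fmt.parse("2013/01/14"));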
The number of reducers to use is automatically determined based on the size of the data to process.
The total size is computed and then divided by the value of the property num.reducers.bytes.per.reducer, which
defaults to 256 MB. The result of this division is the number of reducers that will be used.
The number of reducers can also be set to a fixed value through the property num.reducers.
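As a concrete illustration of the default calculation: with 10 GB of input to process and the default of 256 MB per reducer, the job would use 10,240 MB / 256 MB = 40 reducers. Both knobs can be set through the same hypothetical Properties object:

  // Allow 512 MB of input per reducer (value is in bytes):
  props.setProperty("num.reducers.bytes.per.reducer", String.valueOf(512L * 1024 * 1024));
  // Or pin the reducer count directly:
  props.setProperty("num.reducers", "20");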
This type of job is capable of performing its work over multiple iterations.
The number of days to process at a time can be limited by setting the property max.days.to.process,
or by calling
#setMaxToProcess(Integer). The default is 90 days.
This can be useful when there are restrictions on how many tasks
can be used by a single MapReduce job in the cluster. When this property is set, the job will process no more than
this many days at a time, and it will perform one or more iterations if necessary to complete the work.
The number of iterations can be limited by setting the property max.iterations, or by calling
#setMaxIterations(Integer).
If the number of iterations is exceeded the job will fail. By default the maximum number of iterations is 20.
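For example, to break the work into smaller runs in the hypothetical job above:

  job.setMaxToProcess(30);   // at most 30 days per MapReduce pass
  job.setMaxIterations(5);   // fail if more than 5 passes would be needed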
Hadoop configuration may be provided by setting a property with the prefix hadoop-conf. (the prefix includes the trailing dot).
For example, mapred.min.split.size can be configured by setting property
hadoop-conf.mapred.min.split.size to the desired value.
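For instance, with the same hypothetical Properties object:

  // 536870912 bytes = 512 MB minimum split size
  props.setProperty("hadoop-conf.mapred.min.split.size", "536870912");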