Configuration Files

These tables describe the parameters and values you must specify before you run the Joiner job.
Note: The descriptions here assume three input files for the Joiner job. However, the job can have any number of input files.
Table 1. inputFileConfig
Parameter Description
pb.bdq.input.type Input file type. The values can be: TEXT, ORC, or PARQUET.
These rows describe the details of the first input file.
pb.bdq.inputfile.path.0 The path of the input file on HDFS. For example, /home/hduser/input/input0.txt
textinputformat.record.delimiter.0 Record delimiter used in a TEXT-type input file. The values can be LINUX, MACINTOSH, or WINDOWS.
pb.bdq.inputformat.field.delimiter.0 Field or column delimiter used in the input file, such as comma (,) or tab.
pb.bdq.inputformat.text.qualifier.0 Text qualifiers, if any, in the columns or fields of the input file.
pb.bdq.inputformat.file.header.0 Column headers as comma-separated values. For example: business name,id,domain
pb.bdq.inputformat.skip.firstrow.0 Whether to skip the first row of the input file during processing. The values can be True or False, where True indicates skip.
These rows describe the details of the second input file.
pb.bdq.inputfile.path.1 The path of the input file on HDFS. For example, /home/hduser/input/input1.txt
textinputformat.record.delimiter.1 Record delimiter used in a TEXT-type input file. The values can be LINUX, MACINTOSH, or WINDOWS.
pb.bdq.inputformat.field.delimiter.1 Field or column delimiter used in the input file, such as comma (,) or tab.
pb.bdq.inputformat.text.qualifier.1 Text qualifiers, if any, in the columns or fields of the input file.
pb.bdq.inputformat.file.header.1 Column headers as comma-separated values. For example: business name,id,domain
pb.bdq.inputformat.skip.firstrow.1 Whether to skip the first row of the input file during processing. The values can be True or False, where True indicates skip.
These rows describe details of the third input file.
pb.bdq.inputfile.path.2 The path of the input file on HDFS. For example, /home/hduser/input/input2.txt
textinputformat.record.delimiter.2 Record delimiter used in a TEXT-type input file. The values can be LINUX, MACINTOSH, or WINDOWS.
pb.bdq.inputformat.field.delimiter.2 Field or column delimiter used in the input file, such as comma (,) or tab.
pb.bdq.inputformat.text.qualifier.2 Text qualifiers, if any, in the columns or fields of the input file.
pb.bdq.inputformat.file.header.2 Column headers as comma-separated values. For example: business name,id,domain
pb.bdq.inputformat.skip.firstrow.2 Whether to skip the first row of the input file during processing. The values can be True or False, where True indicates skip.
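To illustrate the parameters above, a minimal inputFileConfig for two TEXT-type input files might look like the sketch below. The paths, delimiters, and header names are placeholder values for illustration only, not defaults shipped with the product:

```properties
# Illustrative inputFileConfig sketch -- paths and headers are placeholders.
pb.bdq.input.type=TEXT

# First input file (index 0)
pb.bdq.inputfile.path.0=/home/hduser/input/input0.txt
textinputformat.record.delimiter.0=LINUX
pb.bdq.inputformat.field.delimiter.0=,
pb.bdq.inputformat.file.header.0=business name,id,domain
pb.bdq.inputformat.skip.firstrow.0=True

# Second input file (index 1)
pb.bdq.inputfile.path.1=/home/hduser/input/input1.txt
textinputformat.record.delimiter.1=LINUX
pb.bdq.inputformat.field.delimiter.1=,
pb.bdq.inputformat.file.header.1=id,address
pb.bdq.inputformat.skip.firstrow.1=True
```

Each numeric suffix (.0, .1, and so on) groups the properties that describe one input file.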
Table 2. joinerConfig
Parameter Description
pb.bdq.job.type A constant value that identifies the job. The value for this job is: Joiner.
pb.bdq.job.name Name of the job. Default is JoinerSample.
com.pb.bdq.dim.join.left.port JSON string defining the input file index of the left port.
com.pb.bdq.dim.join.type The type of join operation to be performed. The options are:
  • LeftOuter
  • Full
  • Inner
com.pb.bdq.dim.join.col.0 The columns of the first input file on which to join, as comma-separated values.
com.pb.bdq.dim.join.col.1 The columns of the second input file on which to join, as comma-separated values.
com.pb.bdq.dim.join.col.2 The columns of the third input file on which to join, as comma-separated values.
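The joinerConfig parameters above can be sketched as a properties file like the following. The join column names and the exact JSON shape of the left-port value are assumptions for illustration; check your release's sample files for the precise format:

```properties
# Illustrative joinerConfig sketch -- values are examples, not defaults.
pb.bdq.job.type=Joiner
pb.bdq.job.name=JoinerSample
# Left port set to input file index 0 (JSON shape is an assumption here)
com.pb.bdq.dim.join.left.port={"fileIndex":"0"}
com.pb.bdq.dim.join.type=Inner
# Join each input file on its id column (hypothetical column name)
com.pb.bdq.dim.join.col.0=id
com.pb.bdq.dim.join.col.1=id
com.pb.bdq.dim.join.col.2=id
```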
Table 3. mapReduceConfig
Specifies the MapReduce configuration parameters.
Use this file to customize MapReduce parameters, such as mapreduce.map.memory.mb, mapreduce.reduce.memory.mb, and mapreduce.map.speculative, as needed for your job.
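As a sketch, a mapReduceConfig that sets the parameters named above might look like this. The memory values shown are arbitrary examples; tune them to your cluster's capacity:

```properties
# Illustrative mapReduceConfig sketch -- example values, tune per cluster.
mapreduce.map.memory.mb=2048
mapreduce.reduce.memory.mb=2048
mapreduce.map.speculative=false
```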
Table 4. OutputFileConfig
Parameter Description
pb.bdq.output.type Specify if the output is in: TEXT, ORC, or PARQUET format.
pb.bdq.outputfile.path The path on HDFS where you want the output file to be generated. For example, /user/hduser/sampledata/joiner/output
pb.bdq.outputformat.field.delimiter Field or column delimiter in the output file, such as comma (,) or tab.
pb.bdq.output.overwrite If true, the output folder is overwritten each time the job is run.
pb.bdq.outputformat.headerfile.create Specify true if the output file must include a header.
Properties of the Parquet file
parquet.compression The compression algorithm used to compress pages. It is one of: UNCOMPRESSED, SNAPPY, GZIP, or LZO. Default is UNCOMPRESSED.
parquet.block.size The size of a row group buffered in memory. Larger values improve I/O when reading but consume more memory when writing. Default is 134217728 bytes (128 * 1024 * 1024).
parquet.page.size A page is the smallest unit within a block that must be read fully to access a single record. Default is 1048576 bytes (1 * 1024 * 1024).
Note: A very small page size deteriorates compression.
parquet.dictionary.page.size The size of the dictionary page. Default is 1048576 bytes (1 * 1024 * 1024).
parquet.enable.dictionary The boolean value (True or False) that enables or disables dictionary encoding. Default is True.
parquet.validation The boolean value that enables or disables validation. Default is False.
parquet.writer.version The version of the Parquet writer, either PARQUET_1_0 or PARQUET_2_0. Default is PARQUET_1_0.
parquet.writer.max-padding The maximum padding allowed in a row group. Default is no padding (0% of the row group size).
parquet.page.size.check.estimate Default is True.
parquet.page.size.row.check.min Default is 100.
parquet.page.size.row.check.max Default is 10000.
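Putting Table 4 together, an OutputFileConfig for PARQUET output might be sketched as follows. The output path is a placeholder, and the Parquet properties shown are optional; the listed defaults apply when they are omitted:

```properties
# Illustrative OutputFileConfig sketch for PARQUET output.
# The path is a placeholder; Parquet properties are optional overrides.
pb.bdq.output.type=PARQUET
pb.bdq.outputfile.path=/user/hduser/sampledata/joiner/output
pb.bdq.output.overwrite=true
pb.bdq.outputformat.headerfile.create=true

# Optional Parquet tuning (defaults apply if these lines are removed)
parquet.compression=SNAPPY
parquet.block.size=134217728
parquet.enable.dictionary=True
```

For TEXT output, pb.bdq.outputformat.field.delimiter would be set instead of the Parquet properties.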