Configuration Files

The following tables describe the parameters and values you must specify before running the Intraflow Match job.

Table 1. inputFileConfig

pb.bdq.input.type
    The input file type. The values can be: file, TEXT, or ORC.
pb.bdq.inputfile.path
    The HDFS path where you have placed the input file. For example, /user/hduser/sampledata/intramatch/input/Intraflow_Input.txt.
textinputformat.record.delimiter
    The record delimiter used in a text-type input file. For example, LINUX, MACINTOSH, or WINDOWS.
pb.bdq.inputformat.field.delimiter
    The field or column delimiter used in the input file, such as a comma (,) or a tab.
pb.bdq.inputformat.text.qualifier
    The text qualifier, if any, used around the columns or fields of the input file.
pb.bdq.inputformat.file.header
    The headers used in the input file. For example: name, firstname, lastname, matchkey, middlename, and recordid.
pb.bdq.inputformat.skip.firstrow
    Specifies whether the first row is skipped during processing. The values can be True or False, where True indicates that the first row is skipped.
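
For reference, a minimal inputFileConfig might look like the following. This is a sketch only: it assumes a Java properties-style key=value layout with # comments, which may differ from the sample configuration file shipped with your distribution. The path and header values are the examples given above; the delimiter and qualifier values are illustrative.

    # Read a delimited text file from HDFS
    pb.bdq.input.type=TEXT
    pb.bdq.inputfile.path=/user/hduser/sampledata/intramatch/input/Intraflow_Input.txt
    # Records end with LINUX-style line endings
    textinputformat.record.delimiter=LINUX
    # Fields are comma separated and quoted with double quotes
    pb.bdq.inputformat.field.delimiter=,
    pb.bdq.inputformat.text.qualifier="
    pb.bdq.inputformat.file.header=name,firstname,lastname,matchkey,middlename,recordid
    # The first row contains the header, so skip it
    pb.bdq.inputformat.skip.firstrow=True
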
Table 2. intraMatchConfig

pb.bdq.job.type
    A constant value that defines the job. The value for this job is: IntraMatch.
pb.bdq.job.name
    The name of the job. Default is IntraMatchSample.
pb.bdq.match.rule
    JSON string that defines the match rule. It specifies details such as the match rule hierarchy, the matching method, the method used to score blank data in a field, the scoring method, and the algorithm used to determine whether the values in a field match.
pb.bdq.match.groupby
    The name of the column used for grouping records in the match queue.
pb.bdq.reduce.count
    The number of reducers to run. Default is 1.
pb.bdq.match.express.column
    The name of the express match column. If the content of this column matches between the suspect and a candidate, no further processing is needed to determine whether the suspect and the candidate are duplicates.
pb.bdq.match.keygenerator.json
    JSON string that defines the match key generator rule, such as whether to use expressMatchKey, the name of the matchKeyField, and the algorithm to be used.
    Note: This parameter is optional.
pb.bdq.match.unique.collectnumber.zero
    If set to true, unique records are assigned collection number 0.
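
Similarly, a minimal intraMatchConfig could be sketched as shown below, again assuming a key=value layout. The job type and default job name come from the table above; the groupby column (matchkey) and the reducer count are illustrative assumptions, and the match rule and key generator JSON strings are shown only as placeholders because their schema is not described here.

    # Constant job type and job name
    pb.bdq.job.type=IntraMatch
    pb.bdq.job.name=IntraMatchSample
    # Match rule and (optional) key generator rule are JSON strings; schema not shown here
    pb.bdq.match.rule=<match rule JSON string>
    pb.bdq.match.keygenerator.json=<match key generator JSON string>
    # Group records into match queues by the matchkey column (illustrative)
    pb.bdq.match.groupby=matchkey
    pb.bdq.reduce.count=1
    # Assign collection number 0 to unique records
    pb.bdq.match.unique.collectnumber.zero=true
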
Table 3. mapReduceConfig

Specifies the MapReduce configuration parameters. Use this file to customize MapReduce parameters, such as mapreduce.map.memory.mb, mapreduce.reduce.memory.mb, and mapreduce.map.speculative, as needed for your job.
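
For example, a mapReduceConfig might set the Hadoop properties named above as follows; the memory and speculative-execution values are illustrative and should be tuned for your cluster.

    # Per-task memory limits, in MB
    mapreduce.map.memory.mb=2048
    mapreduce.reduce.memory.mb=4096
    # Disable speculative execution of map tasks
    mapreduce.map.speculative=false
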
Table 4. Output File Configuration

pb.bdq.output.type
    The output file type. The values can be: file, TEXT, or ORC.
outputfile.path
    The HDFS path where you want the output file to be generated. For example, /user/hduser/sampledata/intramatch/output.
pb.bdq.outputformat.field.delimiter
    The field or column delimiter used in the output file, such as a comma (,) or a tab.
pb.bdq.output.overwrite
    If set to true, the output folder is overwritten every time the job is run.
pb.bdq.outputformat.headerfile.create
    Specify true if the output file must include a header.
pb.bdq.job.print.counters.console
    Specifies whether the counters are printed to the console or to a file. True indicates that the counters are printed to the console.
pb.bdq.job.counter.file.path
    The path and name of the file to which the counters are printed. You must specify this if pb.bdq.job.print.counters.console is set to false.
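
Finally, a minimal output configuration might look like the following sketch, again assuming a key=value layout. The output path is the example given above; the counter file path is a hypothetical value, shown only because pb.bdq.job.print.counters.console is set to false in this example.

    # Write a comma-delimited text file with a header, overwriting any previous output
    pb.bdq.output.type=TEXT
    outputfile.path=/user/hduser/sampledata/intramatch/output
    pb.bdq.outputformat.field.delimiter=,
    pb.bdq.output.overwrite=true
    pb.bdq.outputformat.headerfile.create=true
    # Write counters to a file instead of the console
    pb.bdq.job.print.counters.console=false
    pb.bdq.job.counter.file.path=/user/hduser/sampledata/intramatch/output/counters.txt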