Using an Intraflow Match Spark Job
- Create an instance of AdvanceMatchFactory, using its static method getInstance().
- Provide the input and output details for the Intraflow Match job by creating an instance of IntraMatchDetail, specifying the ProcessType. The instance must use the type SparkProcessType.
- Specify the column by which the records are to be grouped by creating an instance of GroupbyOption. Use an instance of GroupbySparkOption to specify the group-by column.
- Generate the matching rules for the job by creating an instance of MatchRule.
- Create an instance of IntraMatchDetail by passing an instance of type JobConfig, the GroupbyOption instance, and the MatchRule instance created above as the arguments to its constructor. The JobConfig parameter must be an instance of type SparkJobConfig.
- Set the details of the input file using the inputPath field of the IntraMatchDetail instance.
  - For a text input file, create an instance of FilePath with the relevant details of the input file by invoking the appropriate constructor.
  - For an ORC input file, create an instance of OrcFilePath with the path of the ORC input file as the argument.
  - For a Parquet input file, create an instance of ParquetFilePath with the path of the Parquet input file as the argument.
- Set the details of the output file using the outputPath field of the IntraMatchDetail instance.
  - For a text output file, create an instance of FilePath with the relevant details of the output file by invoking the appropriate constructor.
  - For an ORC output file, create an instance of OrcFilePath with the path of the ORC output file as the argument.
  - For a Parquet output file, create an instance of ParquetFilePath with the path of the Parquet output file as the argument.
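As an illustration, the input and output path steps might look like the Java fragment below. This is a hedged sketch, not vendor code: the variable name detail, the file paths, and the use of direct field assignment are assumptions based on the field names described above, and the exact constructor signatures may differ in your SDK version.

```java
// Hedged sketch: "detail" is an already-created IntraMatchDetail instance
// (see the construction step above); the paths are placeholders.

// ORC input file: OrcFilePath takes the path of the input file as the argument.
detail.inputPath = new OrcFilePath("/data/intramatch/input.orc");

// Parquet output file: ParquetFilePath takes the path of the output file as the argument.
detail.outputPath = new ParquetFilePath("/data/intramatch/output.parquet");
```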
- Set the name of the job using the jobName field of the IntraMatchDetail instance.
- If required, set the Express Match Column using the expressMatchColumn field of the IntraMatchDetail instance.
- Set the collectionNumberZerotoUniqueRecords flag of the IntraMatchDetail instance to true to allocate the collection number 0 (zero) to unique records. The default is true. If you do not wish to allocate the collection number zero to unique records, set this flag to false.
- Set the compressOutput flag of the IntraMatchDetail instance to true to compress the output of the job.
- If the input data does not already have match keys, you must specify the match key settings so that the Match Key Generator job runs first to generate the match keys, before the Intraflow Match job runs. To do this, create and configure an instance of MatchKeySettings, and set it using the matchKeySettings field of the IntraMatchDetail instance. Note: To see how to set match key settings, see the code samples.
- To create and run the Spark job, use the previously created instance of AdvanceMatchFactory to invoke its method runSparkJob(), passing the above IntraMatchDetail instance as the argument. The runSparkJob() method runs the job and returns a Map of the reporting counters of the job.
- Display the counters to view the reporting statistics for the job.