Using a Filter Spark Job
- Create an instance of AdvanceMatchFactory, using its static method getInstance().
- Provide the input and output details for the Filter job by creating an instance of FilterDetail specifying the ProcessType. The instance must use the type SparkProcessType. To build this instance:
  - Specify the column on which the records are to be grouped by creating an instance of GroupbyOption. Use an instance of GroupbySparkOption to specify the group-by column.
  - Generate the consolidation rules for the job by creating an instance of FilterConfiguration. Within this instance, define the consolidation conditions using instances of ConsolidationCondition, and connect the conditions using logical operators. Each instance of ConsolidationCondition is defined using a ConsolidationRule instance and its corresponding ConsolidationAction instance (see the first sketch after this list).
    Note: Each instance of ConsolidationRule can be defined either using a single instance of SimpleRule, or using a hierarchy of child SimpleRule instances and nested ConjoinedRule instances joined using logical operators. See Enum JoinType and Enum Operation.
  - Create an instance of FilterDetail by passing an instance of type JobConfig along with the GroupbyOption and FilterConfiguration instances created above as the arguments to its constructor. The JobConfig parameter must be an instance of type SparkJobConfig.
  - Set the details of the input file using the inputPath field of the FilterDetail instance. For a text input file, create an instance of FilePath with the relevant details of the input file by invoking the appropriate constructor. For an ORC input file, create an instance of OrcFilePath with the path of the ORC input file as the argument.
  - Set the details of the output file using the outputPath field of the FilterDetail instance. For a text output file, create an instance of FilePath with the relevant details of the output file by invoking the appropriate constructor. For an ORC output file, create an instance of OrcFilePath with the path of the ORC output file as the argument.
  - Set the name of the job using the jobName field of the FilterDetail instance.
  - Set the compressOutput flag of the FilterDetail instance to true to compress the output of the job.
- To create and run the Spark job, use the previously created instance of AdvanceMatchFactory to invoke its method runSparkJob(), passing the above FilterDetail instance as an argument. The runSparkJob() method runs the job and returns a Map of the reporting counters of the job.
- Display the counters to view the reporting statistics for the job (see the second sketch after this list).
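
Below is a minimal Java sketch of how the consolidation rules described above might be assembled. Only the class and enum names (FilterConfiguration, ConsolidationCondition, ConsolidationRule, SimpleRule, ConsolidationAction, Operation) come from the steps above; the package imports, constructor signatures, setter names, and enum values are assumptions and should be verified against the SDK Javadoc.

```java
// Sketch only: the SDK package names are not given in this section, so the
// imports for the SDK classes are omitted and must be added from the Javadoc.
import java.util.ArrayList;
import java.util.List;

public class FilterRulesSketch {

    // Hypothetical helper that keeps, within each group, the record selected by
    // a single SimpleRule. All constructors and setters below are assumptions.
    public static FilterConfiguration buildConfiguration() {
        // One rule on a "Confidence" column; the (column, operation, value)
        // constructor and the Operation value are illustrative assumptions.
        SimpleRule rule = new SimpleRule("Confidence", Operation.HIGHEST, null);

        // Pair the rule with its action; the two-argument constructor and the
        // ConsolidationAction value are assumptions.
        ConsolidationCondition condition =
                new ConsolidationCondition(rule, ConsolidationAction.RETAIN);

        List<ConsolidationCondition> conditions = new ArrayList<>();
        conditions.add(condition);

        FilterConfiguration configuration = new FilterConfiguration();
        configuration.setConsolidationConditions(conditions); // assumed setter
        return configuration;
    }
}
```

When a condition needs more than one comparison, the single SimpleRule above would instead be a hierarchy of SimpleRule and ConjoinedRule instances joined with JoinType values, as noted in the steps.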
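And here is a minimal end-to-end sketch of the remaining steps. AdvanceMatchFactory.getInstance(), runSparkJob(), the FilterDetail(JobConfig, GroupbyOption, FilterConfiguration) constructor, and the inputPath, outputPath, jobName, and compressOutput fields are named in the steps above; everything else (package imports, the SparkJobConfig construction, the group-by setter, the placeholder paths, and the counter Map's type parameters) is an assumption to be checked against the SDK documentation.

```java
// Sketch only: SDK imports omitted; see the Javadoc for the actual packages.
import java.util.Map;

public class FilterSparkJobSketch {

    public static void main(String[] args) throws Exception {
        // Obtain the factory through its static accessor.
        AdvanceMatchFactory factory = AdvanceMatchFactory.getInstance();

        // Spark-specific job configuration; how it is populated (cluster and
        // executor settings, and so on) is an assumption here.
        SparkJobConfig jobConfig = new SparkJobConfig();

        // Group-by column for the filter; the setter name is an assumption.
        GroupbySparkOption groupByOption = new GroupbySparkOption();
        groupByOption.setGroupbyColumn("CustomerId");

        // Consolidation rules built as in the previous sketch.
        FilterConfiguration configuration = FilterRulesSketch.buildConfiguration();

        // FilterDetail takes the JobConfig, GroupbyOption, and
        // FilterConfiguration, per the steps above.
        FilterDetail filterDetail = new FilterDetail(jobConfig, groupByOption, configuration);

        // ORC input and output locations; the paths are placeholders. For text
        // files, FilePath would be constructed with the layout details instead.
        filterDetail.inputPath = new OrcFilePath("/data/filter/input.orc");
        filterDetail.outputPath = new OrcFilePath("/data/filter/output");
        filterDetail.jobName = "FilterSparkJob";
        filterDetail.compressOutput = true;

        // Run the job and display the reporting counters it returns; the Map's
        // type parameters are an assumption.
        Map<String, String> counters = factory.runSparkJob(filterDetail);
        counters.forEach((name, value) -> System.out.println(name + " = " + value));
    }
}
```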