Using a Duplicate Synchronization Spark Job

  1. Create an instance of AdvanceMatchFactory using its static method getInstance().
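    A minimal sketch of this step (SDK imports are omitted throughout these sketches, since package names vary by SDK version):

    ```java
    // Obtain the singleton factory for advanced-matching jobs.
    AdvanceMatchFactory factory = AdvanceMatchFactory.getInstance();
    ```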
  2. Provide the input and output details for the Duplicate Synchronization job by creating an instance of DuplicateSyncDetail that specifies the ProcessType. The instance must use the type SparkProcessType.
    1. Create an instance of GroupbyOption to specify the column on which the records are to be grouped.
      Use an instance of GroupbySparkOption to specify the group-by column.
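      For illustration, a sketch assuming GroupbySparkOption takes the group-by column name in its constructor; the exact signature may differ in your SDK version, and the column name "MatchKey" is a hypothetical example:

      ```java
      // Group the records on a hypothetical "MatchKey" column. The
      // single-argument constructor is an assumption; check the SDK javadoc.
      GroupbyOption groupByOption = new GroupbySparkOption("MatchKey");
      ```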
    2. Generate the consolidation conditions for the job by creating an instance of DuplicateSynchronizationConfiguration. Within this instance, define the consolidation conditions using instances of ConsolidationCondition, connected to one another using logical operators.
      Each instance of ConsolidationCondition is defined using a ConsolidationRule instance and its corresponding ConsolidationAction instance.
      Note: Each instance of ConsolidationRule can be defined either using a single instance of SimpleRule, or using a hierarchy of child SimpleRule instances and nested ConjoinedRule instances joined using logical operators. See Enum JoinType and Enum Operation.
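      Continuing the sketch with a single condition (the SimpleRule and ConsolidationAction constructors, the Operation.EQUALS constant, and the addConsolidationCondition() method are all assumptions for illustration, and the field name "Status" is hypothetical):

      ```java
      // When the hypothetical "Status" field equals "Active", apply the
      // associated consolidation action. All signatures below are assumed.
      ConsolidationRule rule = new SimpleRule("Status", Operation.EQUALS, "Active");
      ConsolidationAction action = new ConsolidationAction(); // assumed constructor
      ConsolidationCondition condition = new ConsolidationCondition(rule, action);

      DuplicateSynchronizationConfiguration dupSyncConfig =
              new DuplicateSynchronizationConfiguration();
      dupSyncConfig.addConsolidationCondition(condition); // assumed adder method
      // Further conditions would be added here, joined with logical operators.
      ```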
    3. Create an instance of DuplicateSyncDetail by passing an instance of type JobConfig, the GroupbyOption instance, and the DuplicateSynchronizationConfiguration instance created above as arguments to its constructor.
      The JobConfig parameter must be an instance of type SparkJobConfig.
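      Continuing the sketch (the constructor argument order follows this step; the no-argument SparkJobConfig constructor is an assumption):

      ```java
      // Build the Spark job configuration and assemble the job detail.
      JobConfig jobConfig = new SparkJobConfig(); // assumed constructor
      DuplicateSyncDetail detail =
              new DuplicateSyncDetail(jobConfig, groupByOption, dupSyncConfig);
      ```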
    4. Set the details of the input file using the inputPath field of the DuplicateSyncDetail instance.
      • For a text input file, create an instance of FilePath with the relevant details of the input file by invoking the appropriate constructor.
      • For an ORC input file, create an instance of OrcFilePath with the path of the ORC input file as the argument.
      • For a Parquet input file, create an instance of ParquetFilePath with the path of the Parquet input file as the argument.
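      For example, for an ORC input file (the path below is hypothetical; the single-argument OrcFilePath constructor follows this step):

      ```java
      // Assign the input file to the inputPath field of the job detail.
      detail.inputPath = new OrcFilePath("/user/hadoop/dupsync/input.orc");
      // For a text file, use a FilePath constructor overload matching the
      // file's details; for Parquet, use ParquetFilePath with the file path.
      ```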
    5. Set the details of the output file using the outputPath field of the DuplicateSyncDetail instance.
      • For a text output file, create an instance of FilePath with the relevant details of the output file by invoking the appropriate constructor.
      • For an ORC output file, create an instance of OrcFilePath with the path of the ORC output file as the argument.
      • For a Parquet output file, create an instance of ParquetFilePath with the path of the Parquet output file as the argument.
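      Similarly, for a Parquet output file (hypothetical path):

      ```java
      // Assign the output file to the outputPath field of the job detail.
      detail.outputPath = new ParquetFilePath("/user/hadoop/dupsync/output.parquet");
      ```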
    6. Set the name of the job using the jobName field of the DuplicateSyncDetail instance (see the sketch after the next step).
    7. Set the compressOutput flag of the DuplicateSyncDetail instance to true to compress the output of the job.
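      A sketch covering steps 6 and 7 (the job name is a hypothetical example):

      ```java
      detail.jobName = "DuplicateSyncSample"; // hypothetical job name
      detail.compressOutput = true;           // compress the job output
      ```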
  3. To create and run the Spark job, use the previously created instance of AdvanceMatchFactory to invoke its runSparkJob() method, passing the above instance of DuplicateSyncDetail as the argument (see the sketch after step 4).
    The runSparkJob() method runs the job and returns a Map of the reporting counters of the job.
  4. Display the counters to view the reporting statistics for the job.
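    A sketch of steps 3 and 4 together; the Map's type parameters are an assumption based on the description above, and java.util.Map is assumed to be imported:

    ```java
    // Run the job; runSparkJob() returns the job's reporting counters.
    Map<String, Long> counters = factory.runSparkJob(detail);

    // Display the reporting statistics.
    for (Map.Entry<String, Long> counter : counters.entrySet()) {
        System.out.println(counter.getKey() + " = " + counter.getValue());
    }
    ```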