Using a Table Lookup Spark Job
- Create an instance of DataNormalizationFactory, using its static method getInstance().
- Provide the input and output details for the Table Lookup job by creating an instance of TableLookupDetail specifying the ProcessType. The instance must use the type SparkProcessType.
  - Configure the table lookup rules by creating an instance of TableLookupConfiguration. Within this instance, add an instance of type AbstractTableLookupRule. This AbstractTableLookupRule instance must be defined using one of these classes: Standardize, Categorize, or Identify, corresponding to the desired table lookup rule category.
  - Set the details of the Reference Data path and location type by creating an instance of ReferenceDataPath. See Enum ReferenceDataPathLocation.
  - Create an instance of TableLookupDetail by passing an instance of type JobConfig, and the TableLookupConfiguration and ReferenceDataPath instances created earlier, as the arguments to its constructor. The JobConfig parameter must be an instance of type SparkJobConfig.
  - Set the details of the input file using the inputPath field of the TableLookupDetail instance.
    - For a text input file, create an instance of FilePath with the relevant details of the input file by invoking the appropriate constructor.
    - For an ORC input file, create an instance of OrcFilePath with the path of the ORC input file as the argument.
    - For a Parquet input file, create an instance of ParquetFilePath with the path of the Parquet input file as the argument.
  - Set the details of the output file using the outputPath field of the TableLookupDetail instance.
    - For a text output file, create an instance of FilePath with the relevant details of the output file by invoking the appropriate constructor.
    - For an ORC output file, create an instance of OrcFilePath with the path of the ORC output file as the argument.
    - For a Parquet output file, create an instance of ParquetFilePath with the path of the Parquet output file as the argument.
  - Set the name of the job using the jobName field of the TableLookupDetail instance.
  - Set the compressOutput flag of the TableLookupDetail instance to true to compress the output of the job.
- To create and run the Spark job, use the previously created instance of DataNormalizationFactory to invoke its method runSparkJob(). In this, pass the above instance of TableLookupDetail as an argument. The runSparkJob() method runs the job and returns a Map of the reporting counters of the job.
- Display the counters to view the reporting statistics for the job.
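The steps above can be sketched in Java as follows. Note that this is an illustrative outline only: the class and method names (DataNormalizationFactory, getInstance(), TableLookupDetail, runSparkJob(), and so on) come from the steps above, but the constructor arguments, the addRule() method name, the ReferenceDataPathLocation constant, the Map type parameters, and the example paths are assumptions, not verified API details.

```java
// Sketch only: names come from the procedure above; all constructor
// signatures and arguments marked "assumed" are illustrative guesses.

// Obtain the factory via its static getInstance() method.
DataNormalizationFactory factory = DataNormalizationFactory.getInstance();

// Configure the table lookup rules; Standardize is one of the three
// AbstractTableLookupRule types (Standardize, Categorize, Identify).
TableLookupConfiguration tableLookupConfiguration = new TableLookupConfiguration();
AbstractTableLookupRule rule = new Standardize(/* rule arguments: assumed */);
tableLookupConfiguration.addRule(rule); // method name assumed

// Reference data path and location type (see Enum ReferenceDataPathLocation;
// the constant and constructor signature here are assumed).
ReferenceDataPath referenceDataPath =
        new ReferenceDataPath("/path/to/referenceData", ReferenceDataPathLocation.HDFS);

// Build the job detail; the JobConfig argument must be a SparkJobConfig.
JobConfig jobConfig = new SparkJobConfig(/* Spark configuration: assumed */);
TableLookupDetail tableLookupDetail =
        new TableLookupDetail(jobConfig, tableLookupConfiguration, referenceDataPath);

// Input and output files: OrcFilePath for ORC here; FilePath or
// ParquetFilePath would be used instead for text or Parquet files.
tableLookupDetail.inputPath = new OrcFilePath("/example/input/lookup.orc");
tableLookupDetail.outputPath = new OrcFilePath("/example/output/lookup");

// Job name and optional output compression.
tableLookupDetail.jobName = "TableLookupSparkJob";
tableLookupDetail.compressOutput = true;

// Run the Spark job; runSparkJob() returns a Map of reporting counters.
Map<String, Long> counters = factory.runSparkJob(tableLookupDetail);

// Display the counters to view the reporting statistics.
counters.forEach((name, value) -> System.out.println(name + " = " + value));
```

The fields inputPath, outputPath, jobName, and compressOutput are shown as direct field assignments because the text calls them fields; if the SDK exposes setters instead, use those.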