To get ready for this article, we first need to check that our Java environment is configured properly; to do this, verify that the JAVA_HOME environment variable is set. Even though the PDI startup scripts try to detect the Java execution environment on their own, it is always a good rule of thumb to have JAVA_HOME set properly whenever we work with a Java application.
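As a quick check before launching any PDI script, you can verify JAVA_HOME from a terminal (a minimal sketch; the JDK path in the last comment is only an example, adjust it to your installation):

```shell
# Print a warning if JAVA_HOME is unset; otherwise show the Java
# version the variable points to.
if [ -z "$JAVA_HOME" ]; then
  echo "JAVA_HOME is not set"
else
  "$JAVA_HOME/bin/java" -version
fi
# To set it for the current session (example path; adjust to your install):
# export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
```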
The Kitchen script is in the PDI home directory, so the easiest way to launch it is to add the PDI home directory to the PATH variable. This lets you start the Kitchen script from any location without specifying the absolute path to the Kitchen file; if you do not do this, you will always have to type the complete path to the Kitchen script file.
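On Linux or Mac, for example, you could add something like the following to your shell profile (the install location /opt/data-integration is an assumption; use your actual PDI home directory):

```shell
# Assumed PDI install location; adjust to your environment.
PDI_HOME=/opt/data-integration
# Append the PDI home directory to PATH so kitchen.sh and pan.sh
# can be launched from any working directory.
export PATH="$PATH:$PDI_HOME"
```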
To play with this article, we will use the samples in the directory <book_samples>/sample1; here, <book_samples> is the directory where you unpacked all the samples of the article.
How to do it…
For starting a PDI job in Linux or Mac, use the following steps:
- Open the command-line terminal and go to the <book_samples>/sample1 directory.
- Let’s start the sample job. To tell Kitchen which job file to start, we need to use the -file argument. Remember that the path to the file can be either absolute or relative. The simplest way to start the job is with the following syntax:
$ kitchen.sh -file:./export-job.kjb
- If you’re not positioned locally in the directory where the job files are located, you must specify the complete path to the job file as follows:
$ kitchen.sh -file:/home/sramazzina/tmp/samples/export-job.kjb
- Another option is to specify the name of the directory where the job file is located separately from the name of the job file itself. To do this, we need to use the -dir argument together with the -file argument. The -dir argument lets you specify the location of the job file's directory using the following syntax:
-dir:<complete_path_to_job_file_directory>
So, if we’re located in the same directory where the job resides, to start the job, we can use the following new syntax:
$ kitchen.sh -dir:. -file:export-job.kjb
- If we’re starting the job from a different directory than the directory where the job resides, we can use the absolute path and the -dir argument to set the job’s directory as follows:
$ kitchen.sh -dir:/home/sramazzina/tmp/samples -file:export-job.kjb
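Because Kitchen returns a non-zero exit code when the job fails, invocations like the ones above are easy to wrap in a small script. A sketch, assuming the article's sample paths:

```shell
# Run the sample job and react to its exit code: Kitchen returns 0 when
# the job completed without errors, non-zero otherwise.
kitchen.sh -dir:/home/sramazzina/tmp/samples -file:export-job.kjb
rc=$?
if [ "$rc" -ne 0 ]; then
  echo "export-job.kjb failed with exit code $rc" >&2
fi
```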
For starting a PDI job with parameters in Linux or Mac, perform the following steps:
- Normally, PDI manages input parameters for the executing job. To set parameters from the command line, we use the -param argument to specify the parameters for the job we are going to launch. The syntax is as follows:
-param:<parameter_name>=<parameter_value>
- Our sample job and transformation accept a parameter called p_country, which specifies the country whose customers we want to export to a file. Let’s suppose we are positioned in the same directory where the job file resides and we want to extract all the customers for the country USA. In this case, we can call the Kitchen script using the following syntax:
$ kitchen.sh -param:p_country=USA -file:./export-job.kjb
Of course, you can apply the -param switch to all the other three cases we detailed previously.
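For instance, -param combines freely with -dir and -file, and you can re-run the same job for several parameter values. A sketch using the article's sample paths (the country list is illustrative):

```shell
# Export customers for several countries by re-running the same job
# with a different p_country value each time.
for country in USA France Germany; do
  kitchen.sh -dir:/home/sramazzina/tmp/samples \
             -param:p_country="$country" \
             -file:export-job.kjb
done
```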
For starting a PDI job in Windows, use the following steps:
- In Windows, a PDI job from the filesystem can be started by following the same rules that we saw previously, using the same arguments in the same way. The only difference is in the way we specify the command-line arguments.
- Any time we start a PDI job from Windows, we need to specify the arguments using the / character instead of the - character used on Linux or Mac. Therefore, this means that:
-dir:<complete_path_to_job_file_directory>
becomes:
/dir:<complete_path_to_job_file_directory>
- From the directory <book_samples>/sample1, if you want to start the job, you can run the Kitchen script using the following syntax:
C:\temp\samples>Kitchen.bat /file:./export-job.kjb
- Regarding the use of PDI parameters in command-line arguments, the second important difference on Windows is that we need to substitute the = character in the parameter assignment syntax with the : character. Therefore, this means that:
-param:<parameter_name>=<parameter_value>
becomes:
/param:<parameter_name>:<parameter_value>
- From the directory <book_samples>/sample1, if you want to extract all the customers for the country USA, you can start the job using the following syntax:
C:\temp\samples>Kitchen.bat /param:p_country:USA /file:./export-job.kjb
For starting the PDI transformations, perform the following steps:
- The Pan script starts PDI transformations. On Linux or Mac, you can find the pan.sh script in the PDI home directory. Assuming that you are in the same directory, <book_samples>/sample1, where the transformation is located, you can start a simple transformation with a command in the following way:
$ pan.sh -file:./read-customers.ktr
- If you want to start a transformation by specifying some parameters, you can use the following command:
$ pan.sh -param:p_country=USA -file:./read-customers.ktr
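Pan (like Kitchen) also accepts a -level argument to control log verbosity (for example Minimal, Basic, Detailed, Debug); redirecting the output captures a run log for later inspection. A sketch, assuming the article's sample transformation and an arbitrary log filename:

```shell
# Run the transformation with a more verbose logging level and capture
# both stdout and stderr in a log file.
pan.sh -param:p_country=USA -file:./read-customers.ktr \
       -level:Detailed > read-customers.log 2>&1
```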
- In Windows, you can use the Pan.bat script, and the sample command will be as follows:
C:\temp\samples>Pan.bat /file:./read-customers.ktr
- Again, if you want to start a transformation by specifying some parameters, you can use the following command:
C:\temp\samples>Pan.bat /param:p_country:USA /file:./read-customers.ktr
In this article, you were guided through starting a PDI job using the Kitchen script. In this case, the PDI job we started was stored locally in the computer's filesystem, but it could be anywhere on the network, in any place that is directly accessible. You learned how to start simple jobs both with and without a set of input parameters previously defined in the job.
Using command-line scripts is a fast way to start batches, and it is also the easiest way to schedule our jobs using our operating system’s scheduler. The script accepts a set of inline arguments to pass the options required to run our job in any specific situation.
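As an illustration of such scheduling on Linux or Mac (a sketch: the crontab line, PDI path, and log file location are assumptions to adapt to your setup), a nightly run could look like this:

```shell
# Crontab entry (added with `crontab -e`): run the sample export job
# every night at 02:00 and append all output to a log file.
# 0 2 * * * /opt/data-integration/kitchen.sh -file:/home/sramazzina/tmp/samples/export-job.kjb >> /var/log/export-job.log 2>&1
```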