Supported Platform: Linux® only.
This example shows you how to use the Hadoop Compiler app to create a deployable archive consisting of MATLAB® map and reduce functions and then pass the deployable archive as a payload argument to a job submitted to a Hadoop® cluster.
Goal: Calculate the maximum arrival delay of an airline from the given dataset.
Dataset: airlinesmall.csv, containing airline departure and arrival information from 1987 to 2008.
Start this example by creating a new work folder that is visible to the MATLAB search path.
Before starting MATLAB, at a terminal, set the environment variable HADOOP_PREFIX to point to the Hadoop installation folder. For example:

Shell: csh / tcsh
% setenv HADOOP_PREFIX /usr/lib/hadoop

Shell: bash
$ export HADOOP_PREFIX=/usr/lib/hadoop
This example uses /usr/lib/hadoop as the directory where Hadoop is installed. Your Hadoop installation directory may be different.
If you forget to set the HADOOP_PREFIX environment variable prior to starting MATLAB, set it using the MATLAB function setenv at the MATLAB command prompt as soon as you start MATLAB. For example:
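setenv('HADOOP_PREFIX','/usr/lib/hadoop')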
Install the MATLAB Runtime in a folder that is accessible to every worker node in the Hadoop cluster. This example uses /usr/local/MATLAB/MATLAB_Runtime/v99 as the location of the MATLAB Runtime folder.
If you don’t have the MATLAB Runtime, you can download it from the MathWorks website.
For information about MATLAB Runtime version numbers and their corresponding MATLAB releases, see this list.
Copy the map function maxArrivalDelayMapper.m from the /usr/local/MATLAB/R2020b/toolbox/matlab/demos folder to the work folder.
For more information, see Write a Map Function.
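For reference, here is a minimal sketch of what such a mapper can look like, modeled on the maxArrivalDelayMapper.m shipped with MATLAB (treat the shipped file as the authoritative version):

function maxArrivalDelayMapper(data, ~, intermKVStore)
% Find the largest arrival delay in this chunk of the dataset and pass
% it on under a single intermediate key for the reducer to aggregate.
partMax = max(data.ArrDelay);
add(intermKVStore, 'PartialMaxArrivalDelay', partMax);
end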
Copy the reduce function maxArrivalDelayReducer.m from the /usr/local/MATLAB/R2020b/toolbox/matlab/demos folder to the work folder.
For more information, see Write a Reduce Function.
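Likewise, a minimal sketch of the corresponding reducer, modeled on the shipped maxArrivalDelayReducer.m:

function maxArrivalDelayReducer(intermKey, intermValIter, outKVStore)
% Scan the partial maxima produced by the mappers and keep the overall
% maximum, writing it out under the key 'MaxArrivalDelay'.
maxVal = -inf;
while hasnext(intermValIter)
    maxVal = max(getnext(intermValIter), maxVal);
end
add(outKVStore, 'MaxArrivalDelay', maxVal);
end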
Create the directory /user/<username>/datasets on HDFS™ and copy the file airlinesmall.csv to that directory. Here <username> refers to your user name in HDFS.

$ ./hadoop fs -copyFromLocal airlinesmall.csv hdfs://host:54310/user/<username>/datasets
Start MATLAB and verify that the HADOOP_PREFIX environment variable has been set. At the command prompt, type:

getenv('HADOOP_PREFIX')

If ans is empty, review the Prerequisites section above to see how you can set the HADOOP_PREFIX environment variable.
Create a datastore to the file airlinesmall.csv and save it to a .mat file. This datastore object is meant to capture the structure of your actual dataset on HDFS.
ds = datastore('airlinesmall.csv','TreatAsMissing','NA',...
    'SelectedVariableNames','ArrDelay','ReadSize',1000);
save('infoAboutDataset.mat','ds')
In most cases, you start by working with a small sample dataset that resides on a local machine and is representative of the actual dataset on the cluster. This sample dataset has the same structure and variables as the actual dataset on the cluster. By creating a datastore object to the dataset residing on your local machine, you take a snapshot of that structure. A Hadoop job executing on the cluster can then use this datastore object to access and process the actual dataset residing on HDFS.
In this example, the sample dataset (local) and the actual dataset on HDFS are the same.
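As an optional sanity check (not part of the original workflow), you can reload and preview the saved datastore to confirm it exposes the expected variable:

% Reload the saved datastore and show the first few rows of ArrDelay.
load('infoAboutDataset.mat','ds')
preview(ds)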
Launch the Hadoop Compiler app through the MATLAB command line (
>> hadoopCompiler) or
through the apps gallery.
In the Map Function section of the toolstrip, click the plus button to add the mapper file maxArrivalDelayMapper.m.
In the Reduce Function section of the toolstrip, click the plus button to add the reducer file maxArrivalDelayReducer.m.
In the Datastore File section, click the plus button to add the file infoAboutDataset.mat containing the datastore object.
In the Output Types section, select keyvalue as the output type. Selecting keyvalue as your output type means your results can only be read within MATLAB. If you want your results to be accessible outside of MATLAB, select tabular text as the output type.
Rename the MapReduce job payload information to maxArrivalDelay.
Click Package to build a deployable archive.
The Hadoop Compiler app creates a log file, PackagingLog.txt, and two folders, for_testing and for_redistribution. You can use the log file PackagingLog.txt to see the mcc syntax used to package the deployable archive.
From a Linux shell, navigate to the for_testing folder containing the deployable archive.
Incorporate the deployable archive containing MATLAB map and reduce functions into a Hadoop mapreduce job from a Linux shell using the following command:
$ hadoop \
jar /usr/local/MATLAB/MATLAB_Runtime/v99/toolbox/mlhadoop/jar/a2.2.0/mwmapreduce.jar \
com.mathworks.hadoop.MWMapReduceDriver \
-D mw.mcrroot=/usr/local/MATLAB/MATLAB_Runtime/v99 \
maxArrivalDelay.ctf \
hdfs://host:54310/user/<username>/datasets/airlinesmall.csv \
hdfs://host:54310/user/<username>/results
Alternately, you can incorporate the deployable archive containing MATLAB map and reduce functions into a Hadoop mapreduce job using the shell script generated by the Hadoop Compiler app. At the Linux shell type the following command:
$ ./run_maxArrivalDelay.sh \
/usr/local/MATLAB/MATLAB_Runtime/v99 \
-D mw.mcrroot=/usr/local/MATLAB/MATLAB_Runtime/v99 \
hdfs://host:54310/user/username/datasets/airlinesmall.csv \
hdfs://host:54310/user/username/results
To examine the results, switch to the MATLAB desktop and create a datastore to the results on HDFS. You can then view the results using the read function.
d = datastore('hdfs:///user/<username>/results/part*');
read(d)
ans =

           Key            Value
    _________________    ______

    'MaxArrivalDelay'
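If the job wrote several part files, you can also gather all results at once; a minimal sketch using the datastore d created above:

% Read every part file into a single Key/Value table.
tbl = readall(d)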
Other examples of map and reduce functions are available in the toolbox/matlab/demos folder. You can use these examples to prototype similar deployable archives to run on a Hadoop cluster. For more information, see
Build Effective Algorithms with MapReduce.