So far we have used different map and aggregation functions, on simple and key/value pair RDDs, in order to get simple statistics that help us understand our datasets. In this notebook we will introduce Spark's machine learning library MLlib through its basic statistics functionality, in order to better understand our dataset. We will use the reduced 10-percent KDD Cup 1999 dataset throughout the notebook.
As we did in our first notebook, we will use the reduced dataset (10 percent) provided for the KDD Cup 1999, containing nearly half a million network interactions. The file is provided as a Gzip file that we will download locally.
import urllib
f = urllib.urlretrieve("http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz", "kddcup.data_10_percent.gz")
data_file = "./kddcup.data_10_percent.gz"
raw_data = sc.textFile(data_file)
A local vector is often used as a base type for RDDs in Spark MLlib. A local vector has integer-typed, 0-based indices and double-typed values, stored on a single machine. MLlib supports two types of local vectors: dense and sparse. A dense vector is backed by a double array representing its entry values, while a sparse vector is backed by two parallel arrays: indices and values.
For dense vectors, MLlib uses either Python lists or the NumPy array type. The latter is recommended, so you can simply pass NumPy arrays around. For sparse vectors, users can construct a SparseVector object from MLlib or pass SciPy scipy.sparse column vectors if SciPy is available in their environment. The easiest way to create sparse vectors is to use the factory methods implemented in Vectors.
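As a quick illustration (the feature values below are made up, just to show both constructors), a dense vector can be a plain NumPy array, while a sparse one is built from its size plus parallel arrays of indices and values:
import numpy as np
from pyspark.mllib.linalg import Vectors
# dense vector: a NumPy array of doubles
dense_example = np.array([0.0, 181.0, 5450.0])
# sparse vector: size 3, with non-zero entries at indices 1 and 2
sparse_example = Vectors.sparse(3, [1, 2], [181.0, 5450.0])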
Let's represent each network interaction in our dataset as a dense vector. For that we will use the NumPy array type.
import numpy as np
def parse_interaction(line):
    line_split = line.split(",")
    # keep just numeric and logical values (drop the symbolic columns and the label)
    symbolic_indexes = [1,2,3,41]
    clean_line_split = [item for i,item in enumerate(line_split) if i not in symbolic_indexes]
    return np.array([float(x) for x in clean_line_split])
vector_data = raw_data.map(parse_interaction)
Spark's MLlib provides column summary statistics for RDD[Vector] through the function colStats available in Statistics. The method returns an instance of MultivariateStatisticalSummary, which contains the column-wise max, min, mean, variance, and number of nonzeros, as well as the total count.
from pyspark.mllib.stat import Statistics
from math import sqrt
# Compute column summary statistics.
summary = Statistics.colStats(vector_data)
print "Duration Statistics:"
print " Mean: {}".format(round(summary.mean()[0],3))
print " St. deviation: {}".format(round(sqrt(summary.variance()[0]),3))
print " Max value: {}".format(round(summary.max()[0],3))
print " Min value: {}".format(round(summary.min()[0],3))
print " Total value count: {}".format(summary.count())
print " Number of non-zero values: {}".format(summary.numNonzeros()[0])
The interesting part of summary statistics, in our case, comes from being able to obtain them by the type of network attack or 'label' in our dataset. By doing so we will be able to better characterise our dataset's dependent variable in terms of the range of values of the independent variables.
To do so, we need an RDD with labels as keys and vectors as values, which we can then filter by key. For that we just need to adapt our parse_interaction function to return a tuple with both elements.
def parse_interaction_with_key(line):
    line_split = line.split(",")
    # keep just numeric and logical values, and return the label (column 41) as the key
    symbolic_indexes = [1,2,3,41]
    clean_line_split = [item for i,item in enumerate(line_split) if i not in symbolic_indexes]
    return (line_split[41], np.array([float(x) for x in clean_line_split]))
label_vector_data = raw_data.map(parse_interaction_with_key)
The next step is not very sophisticated. We use filter on the RDD to leave out all labels but the one we want to gather statistics for.
normal_label_data = label_vector_data.filter(lambda x: x[0]=="normal.")
Now we can use the new RDD to call colStats on the values.
normal_summary = Statistics.colStats(normal_label_data.values())
And collect the results as we did before.
print "Duration Statistics for label: {}".format("normal")
print " Mean: {}".format(normal_summary.mean()[0],3)
print " St. deviation: {}".format(round(sqrt(normal_summary.variance()[0]),3))
print " Max value: {}".format(round(normal_summary.max()[0],3))
print " Min value: {}".format(round(normal_summary.min()[0],3))
print " Total value count: {}".format(normal_summary.count())
print " Number of non-zero values: {}".format(normal_summary.numNonzeros()[0])
Instead of working with key/value pairs, we could have just filtered our raw data split using the label in column 41, and then parsed the results as we did before. That would work as well. However, having our data organised as key/value pairs opens the door to better manipulations. And since values() is a transformation on an RDD, not an action, we don't perform any computation until we call colStats anyway.
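For completeness, here is a minimal sketch of that alternative approach, filtering the raw lines by the label in column 41 and parsing afterwards (the variable names are just for illustration):
# alternative approach: filter raw lines by label first, then parse and summarise
normal_raw_data = raw_data.filter(lambda line: line.split(",")[41] == "normal.")
normal_alt_summary = Statistics.colStats(normal_raw_data.map(parse_interaction))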
But let's wrap this within a function so we can reuse it with any label.
def summary_by_label(raw_data, label):
    # keep only the interactions for the given label and summarise their numeric columns
    label_vector_data = raw_data.map(parse_interaction_with_key).filter(lambda x: x[0]==label)
    return Statistics.colStats(label_vector_data.values())
Let's give it a try with the "normal." label again.
normal_sum = summary_by_label(raw_data, "normal.")
print "Duration Statistics for label: {}".format("normal")
print " Mean: {}".format(normal_sum.mean()[0],3)
print " St. deviation: {}".format(round(sqrt(normal_sum.variance()[0]),3))
print " Max value: {}".format(round(normal_sum.max()[0],3))
print " Min value: {}".format(round(normal_sum.min()[0],3))
print " Total value count: {}".format(normal_sum.count())
print " Number of non-zero values: {}".format(normal_sum.numNonzeros()[0])
Let's now try with a network attack, for example guess_passwd. We list all of the attack types a bit further below.
guess_passwd_summary = summary_by_label(raw_data, "guess_passwd.")
print "Duration Statistics for label: {}".format("guess_password")
print " Mean: {}".format(guess_passwd_summary.mean()[0],3)
print " St. deviation: {}".format(round(sqrt(guess_passwd_summary.variance()[0]),3))
print " Max value: {}".format(round(guess_passwd_summary.max()[0],3))
print " Min value: {}".format(round(guess_passwd_summary.min()[0],3))
print " Total value count: {}".format(guess_passwd_summary.count())
print " Number of non-zero values: {}".format(guess_passwd_summary.numNonzeros()[0])
We can see that this type of attack is shorter in duration than a normal interaction. We could build a table with duration statistics for each type of interaction in our dataset. First we need a list of labels.
label_list = ["back.","buffer_overflow.","ftp_write.","guess_passwd.",
"imap.","ipsweep.","land.","loadmodule.","multihop.",
"neptune.","nmap.","normal.","perl.","phf.","pod.","portsweep.",
"rootkit.","satan.","smurf.","spy.","teardrop.","warezclient.",
"warezmaster."]
Then we get a list of statistics for each label.
stats_by_label = [(label, summary_by_label(raw_data, label)) for label in label_list]
Now we get the duration column, the first one in our dataset (i.e. index 0).
duration_by_label = [
(stat[0], np.array([float(stat[1].mean()[0]), float(sqrt(stat[1].variance()[0])), float(stat[1].min()[0]), float(stat[1].max()[0]), int(stat[1].count())]))
for stat in stats_by_label]
We can put that into a Pandas data frame.
import pandas as pd
pd.set_option('display.max_columns', 50)
stats_by_label_df = pd.DataFrame.from_items(duration_by_label, columns=["Mean", "Std Dev", "Min", "Max", "Count"], orient='index')
And print it.
print "Duration statistics, by label"
stats_by_label_df
In order to reuse this code and get a dataframe for any variable in our dataset, we will define a function.
def get_variable_stats_df(stats_by_label, column_i):
    # build (label, [mean, std dev, min, max, count]) pairs for the given column index
    column_stats_by_label = [
        (stat[0], np.array([float(stat[1].mean()[column_i]), float(sqrt(stat[1].variance()[column_i])), float(stat[1].min()[column_i]), float(stat[1].max()[column_i]), int(stat[1].count())]))
        for stat in stats_by_label
    ]
    return pd.DataFrame.from_items(column_stats_by_label, columns=["Mean", "Std Dev", "Min", "Max", "Count"], orient='index')
Let's try for duration again.
get_variable_stats_df(stats_by_label,0)
Now for the next numeric column in the dataset, src_bytes.
print "src_bytes statistics, by label"
get_variable_stats_df(stats_by_label,1)
And so on. By reusing the summary_by_label and get_variable_stats_df functions we can perform some exploratory data analysis on large datasets with Spark.
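For instance, a small sketch that loops over the first few numeric columns and prints a table for each (the column indices follow our parsed vectors):
# print per-label statistics for a few columns of interest
for i, name in [(0, "duration"), (1, "src_bytes"), (2, "dst_bytes")]:
    print "{} statistics, by label".format(name)
    print get_variable_stats_df(stats_by_label, i)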
Spark's MLlib supports Pearson's and Spearman's methods to calculate pairwise correlations among many series. Both of them are provided by the corr method in the Statistics package.
We have two options as input: either two RDD[Double]s or an RDD[Vector]. In the first case the output will be a Double value, while in the second it will be a whole correlation Matrix. Due to the nature of our data, we will use the second.
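As a minimal sketch of the first option, we could correlate just two columns (here duration and src_bytes, i.e. indices 0 and 1 of our parsed vectors) and get a single value back:
# pairwise correlation between two RDD[Double]s returns a single Double
duration_col = vector_data.map(lambda v: float(v[0]))
src_bytes_col = vector_data.map(lambda v: float(v[1]))
print Statistics.corr(duration_col, src_bytes_col, method="pearson")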
from pyspark.mllib.stat import Statistics
correlation_matrix = Statistics.corr(vector_data, method="spearman")
Once we have the correlations ready, we can start inspecting their values.
import pandas as pd
pd.set_option('display.max_columns', 50)
col_names = ["duration","src_bytes","dst_bytes","land","wrong_fragment",
"urgent","hot","num_failed_logins","logged_in","num_compromised",
"root_shell","su_attempted","num_root","num_file_creations",
"num_shells","num_access_files","num_outbound_cmds",
"is_hot_login","is_guest_login","count","srv_count","serror_rate",
"srv_serror_rate","rerror_rate","srv_rerror_rate","same_srv_rate",
"diff_srv_rate","srv_diff_host_rate","dst_host_count","dst_host_srv_count",
"dst_host_same_srv_rate","dst_host_diff_srv_rate","dst_host_same_src_port_rate",
"dst_host_srv_diff_host_rate","dst_host_serror_rate","dst_host_srv_serror_rate",
"dst_host_rerror_rate","dst_host_srv_rerror_rate"]
corr_df = pd.DataFrame(correlation_matrix, index=col_names, columns=col_names)
corr_df
We have used a Pandas DataFrame here to render the correlation matrix in a more comprehensible way. Now we want those variables that are highly correlated. For that we do a bit of dataframe manipulation.
# get a boolean dataframe where true means that a pair of variables is highly correlated
highly_correlated_df = (abs(corr_df) > .8) & (corr_df < 1.0)
# get the names of the variables so we can use them to slice the dataframe
correlated_vars_index = (highly_correlated_df==True).any()
correlated_var_names = correlated_vars_index[correlated_vars_index==True].index
# slice it
highly_correlated_df.loc[correlated_var_names,correlated_var_names]
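As a small follow-up sketch, we can also list the highly correlated pairs explicitly, so that each pair of variable names appears only once:
# enumerate the highly correlated variable pairs from the boolean dataframe
correlated_pairs = [(row, col)
                    for i, row in enumerate(correlated_var_names)
                    for col in correlated_var_names[i+1:]
                    if highly_correlated_df.loc[row, col]]
correlated_pairs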
The previous dataframe showed us which variables are highly correlated. We have kept just those variables with at least one strong correlation. We can use this information as we please, but a good way could be to do some model selection. That is, if we have a group of variables that are highly correlated, we can keep just one of them to represent the group, under the assumption that they convey similar information as predictors. Reducing the number of variables will not improve our model accuracy, but it will make the model easier to understand and more efficient to compute.
For example, from the description of the KDD Cup 99 task we know that the variable dst_host_same_src_port_rate references the percentage of the last 100 connections to the same port, for the same destination host. In our correlation matrix (and auxiliary dataframes) we find that this one is highly and positively correlated to src_bytes and srv_count. The former is the number of bytes sent from source to destination. The latter is the number of connections to the same service as the current connection in the past 2 seconds. We might decide not to include dst_host_same_src_port_rate in our model if we include the other two, as a way to reduce the number of variables and later on better interpret our models.
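We can check those particular entries directly in the correlation matrix, as a quick sanity check of the claim above:
# inspect the correlations mentioned above
corr_df.loc["dst_host_same_src_port_rate", ["src_bytes", "srv_count"]]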
Later on, in the notebooks dedicated to building predictive models, we will make use of this information to build more interpretable models.