How to implement logistic regression in Spark with R using the sparklyr package?

Description

To implement logistic regression in Spark with R using the sparklyr package.

Functions used:

spark_connect(master = "local") – To create a Spark connection
sdf_copy_to(spark_connection, R object, name) – To copy data to the Spark environment
sdf_partition(spark_dataframe, partitions with weights, seed) – To partition a Spark dataframe into multiple groups
ml_logistic_regression(train_data, formula) – To build a logistic regression model
ml_predict(ml_model, test_data) – To predict the response for the test data
ml_binary_classification_evaluator(prediction, label_col, raw_prediction_col) – To evaluate the AUC (Area Under the ROC Curve) metric

  • Load the sparklyr package
  • Create a spark connection
  • Copy data to spark environment
  • Split the data for training and testing
  • Build the logistic regression model
  • Predict using the test data
  • Evaluate the metrics

#Load the sparklyr library
library(sparklyr)
library(caret)
#Create a spark connection
sc = spark_connect(master = "local")
#Copy data to spark environment
data_s = sdf_copy_to(sc, read.csv("/……/weight-height.csv"), "gender", overwrite = TRUE)
#Split the data for training and testing
partitions = sdf_partition(data_s, training = 0.8, test = 0.2, seed = 111)
train_data = partitions$training
test_data = partitions$test
#Build the logistic regression model
log_model = ml_logistic_regression(train_data, Gender ~ .)  # assuming the label column is named Gender
summary(log_model)
#Predict using the test data
prediction = ml_predict(log_model, test_data)
prediction
#Evaluate the metrics AUC
cat("Area Under Curve : ", ml_binary_classification_evaluator(prediction, label_col = "label",
    raw_prediction_col = "rawPrediction"))
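As a quick sanity check outside Spark, the same kind of binary model can be fit locally with base R's glm(). The sketch below uses simulated heights standing in for the weight-height.csv data (the column names and numbers are illustrative assumptions, not the real dataset), so it runs without a Spark connection:

```r
# Local sanity check with base R's glm() -- no Spark needed.
# Simulated data standing in for weight-height.csv (illustrative only).
set.seed(111)
n <- 1000
gender <- rbinom(n, 1, 0.5)                                  # 1 = Male, 0 = Female
height <- ifelse(gender == 1, rnorm(n, 69, 3), rnorm(n, 64, 3))
df <- data.frame(gender = factor(gender), height = height)

# Fit logistic regression: P(Male) as a function of height
local_model <- glm(gender ~ height, data = df, family = binomial)
summary(local_model)

# Taller observations should be classified Male more often,
# so the height coefficient should come out positive.
coef(local_model)["height"]
```

If the sparklyr model and a local glm() fit on the same data disagree sharply in coefficient signs, that usually points to a data-copying or label-encoding problem rather than the model itself.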
