How to implement frequent pattern mining using Spark with R?

Description

To implement frequent pattern mining (FP-growth) in Spark, using the SparkR API.

Process
  • Set up the Spark context and Spark session
  • Load the dataset
  • Convert each row into a single transaction, i.e. an array of items (see the input sketch after this list)
  • Build the frequent pattern mining (FP-growth) model and fit it to the data
  • Extract the frequent itemsets
  • Generate the association rules
  • Examine the input items against all the association rules and summarize the consequents as the prediction
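
A minimal sketch of the expected input, assuming one transaction per line with space-separated items; the transactions and item names below are hypothetical and only illustrate the format:

#Hypothetical illustration of the transaction format: each line of the
#input file holds one transaction with space-separated items
library(SparkR)
sparkR.session(master = "local")
tx <- createDataFrame(data.frame(raw_items = c("bread milk butter", "bread beer", "milk beer diapers")))
#split() turns the raw string into an array-of-items column, one transaction
#per row, which is the shape the FP-growth fitter expects
tx <- selectExpr(tx, "split(raw_items, ' ') AS items")
showDF(tx, truncate = FALSE)
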
Sample Code

#Set up the Spark home
Sys.setenv(SPARK_HOME = "/…/spark-2.4.0-bin-hadoop2.7")
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
#Load the library
library(SparkR)
#Initialize the Spark session
#To run Spark on a local node give master="local"
sc <- sparkR.session(master = "local")
#Start the SparkSQL context (in Spark 2.x the session above already provides it)
sqlContext <- sparkRSQL.init()
#Load the dataset: one transaction per line, read as a single string column
data <- read.df("file:///…./GsData.txt", "csv", header = "false", schema = structType(structField("raw_items", "string")), na.strings = "NA")
showDF(data, truncate = FALSE)
#Convert each row into a single transaction (an array of items)
data <- selectExpr(data, "split(raw_items, ' ') AS items")
showDF(data, truncate = FALSE)
#Fit the FP-growth model (the support and confidence thresholds here are example values)
model <- spark.fpGrowth(data, itemsCol = "items", minSupport = 0.2, minConfidence = 0.5)
#To get the frequent item sets
frequent_itemsets <- spark.freqItemsets(model)
showDF(frequent_itemsets)
#To get the association rules
association_rules <- spark.associationRules(model)
showDF(association_rules)
#Predict on new data: matches each transaction's items against the rule antecedents and summarizes the consequents
predop <- predict(model, data)
showDF(predop, truncate = FALSE)
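
As a usage sketch, the fitted model can also score transactions it did not see during fitting; the new transactions below are hypothetical and only illustrate the call pattern:

#Hypothetical new transactions to score against the mined rules
new_tx <- createDataFrame(data.frame(raw_items = c("bread milk", "beer diapers")))
new_tx <- selectExpr(new_tx, "split(raw_items, ' ') AS items")
#For each transaction, predict() returns the consequents of every rule
#whose antecedent is contained in that transaction's items
showDF(predict(model, new_tx), truncate = FALSE)
#Stop the Spark session once finished
sparkR.session.stop()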

Screenshots
  • Set up the Spark context and Spark session
  • Start the SparkSQL context
  • Get the frequent itemsets
  • Generate the association rules