ML.NET Recommendation Engine: Pitfall of One-Class Matrix Factorization

Over the weekend I decided to take a look at what ML.NET has to offer in the area of recommendation engines.

I found a nice picture in Mark Farragher’s blog post that explains the three available options:

The choice depends on what information you have.

  • If you have sophisticated user feedback such as ratings (or likes and, most importantly, dislikes), then you can use the Matrix Factorization algorithm to estimate unknown ratings.
  • If you have not only ratings but also other product fields, you can use a more advanced algorithm called “Field-Aware Factorization Machine”.
  • If you have no ratings at all, then “One-Class Matrix Factorization” is the only option.

In this post I would like to focus on the last option.

One-Class Matrix Factorization

This algorithm can be used when data is limited. For example:

  • Book store: We have a history of purchases (a list of userId + bookId pairs) without user feedback and want to recommend new books to existing users.
  • Amazon store: We have a history of co-purchases (a list of productId + productId pairs) and want to recommend products in the “Customers Who Bought This Item Also Bought” section.
  • Social network: We have information about user friendships (a list of userId + userId pairs) and want to recommend users in the “People You May Know” section.

As you may have noticed, this is applicable to any pair of two categorical variables, not only to userId + productId pairs.

Google turned up several relevant posts about the usage of ML.NET One-Class Matrix Factorization:

After reading all three samples, I realized that I did not fully understand what the Label column is used for. Later I came to the conclusion that all three samples are most likely incorrect, and here is why.

Mathematical details

Let’s take a look at the excellent documentation of the MatrixFactorizationTrainer class. The first gem is:

There are three input columns required, one for matrix row indexes, one for matrix column indexes, and one for values (i.e., labels) in matrix. They together define a matrix in COO format. The type for label column is a vector of Single (float) while the other two columns are key type scalar.

COO stores a list of (row, column, value) tuples. Ideally, the entries are sorted first by row index and then by column index, to improve random access times. This is another format that is good for incremental matrix construction
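
To make that concrete, here is a tiny hypothetical 3×3 matrix and its COO triples (F# purely for illustration):

// The matrix (dots are unobserved cells) and its (row, column, value) triples:
//   [ 1 . 5 ]        (0, 0, 1.0f)
//   [ . 2 . ]        (0, 2, 5.0f)
//   [ . . 3 ]        (1, 1, 2.0f)
//                    (2, 2, 3.0f)
let cooMatrix = [ (0, 0, 1.0f); (0, 2, 5.0f); (1, 1, 2.0f); (2, 2, 3.0f) ]

The three input columns ML.NET asks for are exactly these three tuple positions, spread across a data set.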

So in any case we need three columns. If in classic Matrix Factorization the Label column is the rating, then for One-Class Matrix Factorization we have to fill it with something else.

The second gem is:

The coordinate descent method included is specifically for one-class matrix factorization where all observed ratings are positive signals (that is, all rating values are 1). Notice that the only way to invoke one-class matrix factorization is to assign one-class squared loss to loss function when calling MatrixFactorization(Options). See Page 6 and Page 28 here for a brief introduction to standard matrix factorization and one-class matrix factorization. The default setting induces standard matrix factorization. The underlying library used in ML.NET matrix factorization can be found on a Github repository.

Here is Page 28 from the referenced presentation:
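
In essence, the slide presents the one-class objective. Roughly, in the notation of the LIBMF papers (simplified, with the regularization terms merged into a single $\lambda$), it is

$$\min_{P,Q}\;\sum_{(u,v)\in\Omega^{+}}\left(1-\mathbf{p}_u^{\top}\mathbf{q}_v\right)^2+\alpha\sum_{(u,v)\notin\Omega^{+}}\left(c-\mathbf{p}_u^{\top}\mathbf{q}_v\right)^2+\lambda\left(\lVert P\rVert_F^2+\lVert Q\rVert_F^2\right)$$

where $\Omega^{+}$ is the set of observed pairs and $\mathbf{p}_u$, $\mathbf{q}_v$ are the latent factor vectors. The weight $\alpha$ and the value $c$ assigned to unobserved entries appear to correspond to the Alpha and C options of MatrixFactorizationTrainer.Options.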

As you see, the Label is expected to always be 1, because we observed only one class (positive ratings): the user downloaded a book, the user purchased two items together, there is a friendship between two users.

When the data set does not provide ratings to us, it is our responsibility to feed 1s to MatrixFactorizationTrainer and to specify MatrixFactorizationTrainer.LossFunctionType.SquareLossOneClass as the loss function.
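
To make the fix concrete, here is a minimal one-class training sketch in F# (assuming ML.NET 1.x with the Microsoft.ML.Recommender package; the CoPurchase type, the ids, and the hyper-parameter values are illustrative, not taken from the samples):

open Microsoft.ML
open Microsoft.ML.Trainers

// One observed co-purchase; every observed pair is a positive signal
[<CLIMutable>]
type CoPurchase =
    { ProductId: string
      CoPurchaseProductId: string
      Label: float32 }

let mlContext = MLContext()

// The Label column is filled with 1s: we observed only the positive class
let data =
    [ { ProductId = "p1"; CoPurchaseProductId = "p2"; Label = 1.0f }
      { ProductId = "p1"; CoPurchaseProductId = "p3"; Label = 1.0f }
      { ProductId = "p2"; CoPurchaseProductId = "p3"; Label = 1.0f } ]
let dataView = mlContext.Data.LoadFromEnumerable(data)

// The trainer expects key-typed row/column indexes, so map raw ids to keys
let keyed =
    let toKey (col: string) = mlContext.Transforms.Conversion.MapValueToKey(col)
    let step1 = (toKey "ProductId").Fit(dataView).Transform(dataView)
    (toKey "CoPurchaseProductId").Fit(step1).Transform(step1)

// SquareLossOneClass is what actually switches the trainer to one-class mode
let options = MatrixFactorizationTrainer.Options()
options.MatrixRowIndexColumnName    <- "ProductId"
options.MatrixColumnIndexColumnName <- "CoPurchaseProductId"
options.LabelColumnName             <- "Label"
options.LossFunction <- MatrixFactorizationTrainer.LossFunctionType.SquareLossOneClass
options.Alpha  <- 0.01  // weight of unobserved (implicitly negative) entries
options.Lambda <- 0.025 // regularization

let model =
    mlContext.Recommendation().Trainers.MatrixFactorization(options).Fit(keyed)

The two essential pieces are the Label values hard-coded to 1 and the SquareLossOneClass loss; the rest is interchangeable plumbing.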

Here you can find fixes for the samples:

Google Cloud Vision API from .NET\F# (OAuth2 with ServiceAccount.json)

Google Cloud Platform provides a wide range of APIs. One of them is the Cloud Vision API, which allows you to detect faces in images, extract sentiment, detect landmarks, perform OCR, and more.

One of the available annotators is “Logo Detection”, which allows you to find a company logo in your image and recognize it.

.NET is not part of the mainstream Google Cloud SDK. Google maintains google-api-dotnet-client, which should allow you to authenticate to and call all available services. The API design looks slightly unintuitive for the .NET world (at least from my point of view).

I spent some time on Google/SO/GitHub trying to understand how to use OAuth2 in a server-to-server authentication scenario with the ServiceAccount.json file generated by the Google API Manager.

You cannot use this API without a billing account, so you have to enter your credit card info if you want to play with it.

Also, note that you need two NuGet packages, Google.Apis.Vision.v1 and Google.Apis.Oauth2.v2 (and a lot of their dependencies).
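
If you are using the Package Manager Console, that is:

Install-Package Google.Apis.Vision.v1
Install-Package Google.Apis.Oauth2.v2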

So, here is the full sample:

#I __SOURCE_DIRECTORY__
#load "Scripts/load-references-debug.fsx"

open System.IO
open Google.Apis.Auth.OAuth2
open Google.Apis.Services
open Google.Apis.Vision.v1
open Google.Apis.Vision.v1.Data

// OAuth2 authentication using service account JSON file
let credentials =
    let jsonServiceAccount = @"d:\ServiceAccount.json"
    use stream = new FileStream(jsonServiceAccount, 
                         FileMode.Open, FileAccess.Read)
    GoogleCredential.FromStream(stream)
        .CreateScoped(VisionService.Scope.CloudPlatform)

let visionService = // Google Cloud Vision Service
    BaseClientService.Initializer(
        ApplicationName = "my-cool-app",
        HttpClientInitializer = credentials)
    |> VisionService

// Logo detection request for one image
let createRequest content = 
  let wrap (xs:'a list) = System.Collections.Generic.List(xs)
  BatchAnnotateImagesRequest(
    Requests = wrap
      [AnnotateImageRequest(
        Features = wrap [Feature(Type = "LOGO_DETECTION")],
        Image = Image(Content = content))
      ])
  |> visionService.Images.Annotate


let call fileName = // Call and interpret results
    let request =
        File.ReadAllBytes fileName
        |> System.Convert.ToBase64String
        |> createRequest
    let response = request.Execute()

    [ for x in response.Responses do
        for y in x.LogoAnnotations do
          yield y.Description
    ] |> List.toArray


let x = call "D:\\fsharp256.png"
// val x : string [] = [|"F#"|]

R for the .NET Developer

Jamie Dixon's Home

setwd("C:/Git/R4DotNet")

#y = x1 + x2 + x3 + E
#y is what you are trying to explain
#x1, x2, x3 are the variables that cause/influence y
#E is things that we are not measuring/using for calculations

fuel.efficiency <- read.csv("C:/Git/R4DotNet/Data/FuelEfficiency.csv")
summary(fuel.efficiency)

#MPG = Miles per gallon
#GPM = Gallons per 100 miles
#WT  = Weight of car in 1000 lbs
#DIS = Displacement in cubic inches
#NC  = Number of cylinders
#HP  = Horsepower
#ACC = Acceleration in seconds from 0-60
#ET  = Engine type: 0 = V, 1 = Straight

plot(GPM~WT, data=fuel.efficiency)
plot(GPM~DIS, data=fuel.efficiency)

fuel.efficiency$NC <- factor(fuel.efficiency$NC)
fuel.efficiency$ET <- factor(fuel.efficiency$ET)

R-Fiddle: An online playground for R code

www.R-fiddle.org is an early-stage beta that provides you with a free and powerful environment to write, run and share R code right inside your browser. It even offers the option to include packages. Over the last couple of days it has been gaining more and more traction, and it was mentioned on the front page of Hacker News.

We designed it for those situations where you have code that you need to prototype quickly and then possibly share with others for feedback. All this without needing a user account, or any scrap projects or files! We even included a very easy-to-use ‘embed’ function for blogs and websites, so your visitors can edit and run R code on your own website or blog. This is the first version of R-fiddle, so do not hesitate to give us feedback.

Working together with the help of R-fiddle

You can use R-fiddle to share code snippets with colleagues…

F# Neural Networks with FsLab

Neural networks are a very powerful tool, and at the same time it is not easy to use all of their power. Now we are one step closer to it from F# and .NET. We will delegate model training to R using the R Provider. We will also use Deedle (which was announced a few days ago) for handy data manipulation.

Prerequisites:

Learning from Data:

First of all, we need to load the required assemblies into our FSI session. It is pretty easy with FsLab because the package has a bootstrapping script.

#load "..\packages\FsLab.0.1.4\FsLab.fsx"

The next step is to download and install the missing R packages. For this demo, we need neuralnet for training the neural network model and prediction, and caret for data visualization.

open RProvider.utils
R.install_packages("MASS")
R.install_packages("pbkrtest")
R.install_packages("lattice")
R.install_packages("Matrix")
R.install_packages("mgcv")
R.install_packages("grid")
R.install_packages("neuralnet")
R.install_packages("caret")
R.install_packages("zoo")

Now we are ready to start. We need to open the namespaces and load a data set. For this demo, we have chosen the iris data set, a classic for lots of demos.

open Deedle
open RDotNet
open RProvider
open RProvider.``base``
open RProvider.datasets
open RProvider.neuralnet
open RProvider.caret

let iris : Frame<int, string> = R.iris.GetValue()

To better understand what we are going to do, let’s plot this data set. First of all, split the data into two parts: features (Sepal.Length, Sepal.Width, Petal.Length, Petal.Width) and a target variable (Species). After that, plot the data along different dimensions (different colors represent different Species).

let features =
    iris
    |> Frame.filterCols (fun c _ -> c <> "Species")
    |> Frame.mapColValues (fun c -> c.As<double>())
let targets =
    R.as_factor(iris.Columns.["Species"])

R.featurePlot(x = features, y = targets, plot = "pairs")

nn_features

As you see, our task is not trivial – we have 3 classes instead of 2 (which is not the classic situation) and the classes are not clearly separable. Nevertheless, let’s try! First of all, we need to split our data into 2 parts – training and testing data sets (70% vs 30%). The first part will be sent to the neural network for learning; the second one will be used for measuring model quality. Also, let’s shuffle the data to keep things fair.

iris.ReplaceColumn("Species", targets.AsNumeric())
let range = [1..iris.RowCount]
let trainingIdxs : int[] = R.sample(range, iris.RowCount*7/10).GetValue()
let testingIdxs : int[] = R.setdiff(range, trainingIdxs).GetValue()
let trainingSet = iris.Rows.[trainingIdxs]
let testingSet = iris.Rows.[testingIdxs]

Now we are ready to train a neural network, all we need is to provide a formula (specify what is the input for our model and what is the output) “Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width”, provide a data set and specify the structure of hidden layers. In the following example, we will train the network with two layers of hidden nodes, the first layer with 3 nodes and the second layer with 2 nodes.

let nn = 
    R.neuralnet(
        "Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width", 
        data = trainingSet, hidden = R.c(3,2), 
        err_fct = "ce", linear_output = true)

// Plot the resulting neural network with coefficients
R.eval(R.parse(text="library(grid)"))
R.plot_nn nn

nn_network

Cool! See how simple that was. To be able to measure the quality of the classification, we need to split our testing set into features and targets.

let testingFeatures = 
    testingSet
    |> Frame.filterCols (fun c _ -> c <> "Species") 
    |> Frame.mapColValues (fun c -> c.As<double>())
let testingTargets = 
    testingSet.Columns.["Species"].As<int>().Values

To execute the neural network on the new data (apply our classification) we should call the R.compute method and pass the testing features there.

let prediction = 
    R.compute(nn, testingFeatures)
     .AsList().["net.result"].AsVector() 
    |> Seq.cast<double>
    |> Seq.map (round >> int)

Finally, let’s compare prediction results with testing values:

let misclassified = 
    Seq.zip prediction testingTargets
    |> Seq.filter (fun (a,b) -> a<>b)
    |> Seq.length

printfn "Misclassified irises '%d' of '%d'" misclassified (testingSet.RowCount)

If you execute all these steps one by one, you will see that there are only ~3 misclassifications out of 45 samples. Pretty good quality.
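
If you want slightly more detail than a single count, you can cross-tabulate predictions against the actual classes with plain Seq functions (a small sketch reusing the prediction and testingTargets values defined above):

// Count each (predicted, actual) pair to form a tiny confusion table
Seq.zip prediction testingTargets
|> Seq.countBy id
|> Seq.sortBy fst
|> Seq.iter (fun ((p, a), n) -> printfn "predicted %d, actual %d: %d" p a n)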

Full script:

#load "..\packages\FsLab.0.1.4\FsLab.fsx"

// You need to install the 'neuralnet' and 'caret' packages if you do not have them
open RProvider.utils
R.install_packages("MASS")
R.install_packages("pbkrtest")
R.install_packages("lattice")
R.install_packages("Matrix")
R.install_packages("mgcv")
R.install_packages("grid")
R.install_packages("neuralnet")
R.install_packages("caret")
R.install_packages("zoo")

open Deedle
open RDotNet
open RProvider
open RProvider.``base``
open RProvider.datasets
open RProvider.neuralnet
open RProvider.caret

// Load data from R to Deedle frame
let iris : Frame<int, string> = R.iris.GetValue()

// Observe iris data set
let features =
    iris
    |> Frame.filterCols (fun c _ -> c <> "Species")
    |> Frame.mapColValues (fun c -> c.As<double>())
let targets =
    R.as_factor(iris.Columns.["Species"])

R.featurePlot(x = features, y = targets, plot = "pairs")

iris.ReplaceColumn("Species", targets.AsNumeric())
// Split data to training and testing sets (70% vs 30%)
let range = [1..iris.RowCount]
let trainingIdxs : int[] = R.sample(range, iris.RowCount*7/10).GetValue()
let testingIdxs : int[] = R.setdiff(range, trainingIdxs).GetValue()
let trainingSet = iris.Rows.[trainingIdxs]
let testingSet = iris.Rows.[testingIdxs]

// Train neural network
let nn = 
    R.neuralnet(
        "Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width", 
        data = trainingSet, hidden = R.c(3,2), 
        err_fct = "ce", linear_output = true)

// Plot the resulting neural network with coefficients
R.eval(R.parse(text="library(grid)"))
R.plot_nn nn

// Split testing set into features and targets
let testingFeatures = 
    testingSet
    |> Frame.filterCols (fun c _ -> c <> "Species") 
    |> Frame.mapColValues (fun c -> c.As<double>())
let testingTargets = 
    testingSet.Columns.["Species"].As<int>().Values

// Predict `Species` for testingFeatures with neural network
let prediction = 
    R.compute(nn, testingFeatures)
     .AsList().["net.result"].AsVector() 
    |> Seq.cast<double>
    |> Seq.map (round >> int)

// Calculate number of misclassified irises
let misclassified = 
    Seq.zip prediction testingTargets
    |> Seq.filter (fun (a,b) -> a<>b)
    |> Seq.length

printfn "Misclassified irises '%d' of '%d'" misclassified (testingSet.RowCount)

P.S.

Note: if you have problems with bootstrapping RProvider and/or converting R data frames to Deedle data frames, verify that during the installation of the NuGet packages all assemblies were copied to RProvider’s lib sub-folder (see the following picture).

deedle_rprovider

Rattle for F# devs

A strange thing: Rattle is an awesome tool, but it is not as well known among devs as it should be. We definitely need to fix this.

Rattle (the R Analytical Tool To Learn Easily) presents statistical and visual summaries of data, transforms data into forms that can be readily modelled, builds both unsupervised and supervised models from the data, presents the performance of models graphically, and scores new datasets.

First, we need to install a new package from CRAN. To do so, just open an R console and type the following:

install.packages("rattle")

You also need to check that you have RProvider installed:

Install-Package RProvider

Now we are ready to start.

#I @"..\packages\RProvider.1.0.0\lib"
#r "RDotNet.dll"
#r "RProvider.dll"

open RProvider.rattle
R.rattle() |> ignore

Execute this short snippet and you should see a Rattle start screen similar to the following:

rattle_start

You are ready to study your data without a single line of code.

Load your data from a wide range of sources:

rattle_load

Explore your data using powerful statistical techniques:

rattle_explore

Test the nature of your data:

rattle_test

Transform your data:

rattle_transform

Cluster your data:

rattle_cluster

Identify relationships or affinities:

rattle_associate

Experiment with different models on your data, before implementing any of them in your favorite language:

rattle_model

Evaluate quality of your model:

rattle_evaluate

Learn your data!

Upd: If you are interested in the topic, I can recommend the following book.