NLP: Stanford Named Entity Recognizer with F# (.NET)

Update (2014, January 3): Links and/or samples in this post might be outdated. The latest versions of the samples are available on the new Stanford.NLP.NET site.

All code samples from this post are available on GitHub.

Samples for one more Stanford NLP library have been ported to .NET: the Stanford Named Entity Recognizer (NER).

To compile stanford-ner.jar to a .NET assembly, follow the steps from my post “NLP: Stanford Parser with F# (.NET)“. You can also download an already compiled version from GitHub.
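By analogy with the parser compilation described in that post, the whole step presumably boils down to a single ikvmc call (assuming ikvmc.exe and stanford-ner.jar are in the current directory):

ikvmc.exe stanford-ner.jar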

What is Stanford Named Entity Recognizer (NER)?

Stanford NER (also known as CRFClassifier) is a Java implementation of a Named Entity Recognizer. Named Entity Recognition (NER) labels sequences of words in a text which are the names of things, such as person and company names, or gene and protein names. The software provides a general (arbitrary order) implementation of linear chain Conditional Random Field (CRF) sequence models, coupled with well-engineered feature extractors for Named Entity Recognition. (CRF models were pioneered by Lafferty, McCallum, and Pereira (2001); see Sutton and McCallum (2006) for a better introduction.) Included with the download are good 3 class (PERSON, ORGANIZATION, LOCATION) named entity recognizers for English (in versions with and without additional distributional similarity features) and another pair of models trained on the CoNLL 2003 English training data. The distributional similarity features improve performance but the models require considerably more memory.

Read more about Named-entity recognition on Wikipedia.

Let’s play!

So, again, the code is pretty straightforward and easy to read and understand. It looks procedural, with some extra noise from type casting due to the nature of the Java runtime.

module NerDemo

open java.util
open edu.stanford.nlp.ie.crf
open edu.stanford.nlp.ling
open System.IO
open IKVM.FSharp

let main file =
    // Load the pretrained 3-class (PERSON, ORGANIZATION, LOCATION) English model
    let classifier =
        CRFClassifier.getClassifierNoExceptions(
            @"..\..\..\..\StanfordNLPLibraries\stanford-ner\classifiers\english.all.3class.distsim.crf.ser.gz")
    match file with
    | Some(fileName) ->
        // Annotate a whole file: classify returns a list of sentences,
        // each of which is a list of labeled words (CoreLabel)
        let fileContents = File.ReadAllText(fileName)
        classifier.classify(fileContents)
        |> Collections.toSeq
        |> Seq.cast<java.util.List>
        |> Seq.iter (fun sentence ->
            sentence
            |> Collections.toSeq
            |> Seq.cast<CoreLabel>
            |> Seq.iter (fun word ->
                printf "%s/%O " (word.word()) (word.get(CoreAnnotations.AnswerAnnotation().getClass())))
            printfn "")
    | None ->
        // Annotate two hard-coded sentences in several output formats
        let s1 = "Good afternoon Rajat Raina, how are you today?"
        let s2 = "I go to school at Stanford University, which is located in California."
        printfn "%s\n" (classifier.classifyToString(s1))
        printfn "%s\n" (classifier.classifyWithInlineXML(s2))
        printfn "%s\n" (classifier.classifyToString(s2, "xml", true))
        classifier.classify(s2)
        |> Collections.toSeq
        |> Seq.iteri (fun i coreLabel ->
            printfn "%d\n:%O\n" i coreLabel)

Let’s test NER on the text from Don Syme’s Wikipedia page =).

Don Syme is an Australian computer scientist and a Principal Researcher at Microsoft Research, Cambridge, U.K. He is the designer and architect of the F# programming language, described by a reporter as being regarded as “the most original new face in computer languages since Bjarne Stroustrup developed C++ in the early 1980s.

Earlier, Syme created generics in the .NET Common Language Runtime, including the initial design of generics for the C# programming language, along with others including Andrew Kennedy and later Anders Hejlsberg. Kennedy, Syme and Yu also formalized this widely used system.

He holds a Ph.D. from the University of Cambridge, and is a member of the WG2.8 working group on functional programming. He is a co-author of the book Expert F# 2.0.

In the past he also worked on formal specification, interactive proof, automated verification and proof description languages.

Named-entity recognition result:

Don/PERSON Syme/PERSON is/O an/O Australian/O computer/O scientist/O and/O a/O Principal/O Researcher/O at/O Microsoft/ORGANIZATION Research/ORGANIZATION ,/O Cambridge/LOCATION ,/O U.K./LOCATION ./O He/O is/O the/O designer/O and/O architect/O of/O the/O F/O #/O programming/O language/O ,/O described/O by/O a/O reporter/O as/O being/O regarded/O as/O “/O the/O most/O original/O new/O face/O in/O computer/O languages/O since/O Bjarne/PERSON Stroustrup/PERSON developed/O C/O +/O +/O in/O the/O early/O 1980s/O ./O

Earlier/O ,/O Syme/PERSON created/O generics/O in/O the/O ./O NET/O Common/O Language/O Runtime/O ,/O including/O the/O initial/O design/O of/O generics/O for/O the/O C/O #/O programming/O language/O ,/O along/O with/O others/O including/O Andrew/PERSON Kennedy/PERSON and/O later/O Anders/PERSON Hejlsberg/PERSON ./O Kennedy/PERSON ,/O Syme/PERSON and/O Yu/PERSON also/O formalized/O this/O widely/O used/O system/O ./O

He/O holds/O a/O Ph.D./O from/O the/O University/ORGANIZATION of/ORGANIZATION Cambridge/ORGANIZATION ,/O and/O is/O a/O member/O of/O the/O WG2/O .8/O working/O group/O on/O functional/O programming/O ./O He/O is/O a/O co-author/O of/O the/O book/O Expert/O F/O #/O 2.0/O ./O

In/O the/O past/O he/O also/O worked/O on/O formal/O specification/O ,/O interactive/O proof/O ,/O automated/O verification/O and/O proof/O description/O languages/O ./O

NLP: Stanford POS Tagger with F# (.NET)

Update (2014, January 3): Links and/or samples in this post might be outdated. The latest versions of the samples are available on the new Stanford.NLP.NET site.

All code samples from this post are available on GitHub.

Continuing the theme of porting Stanford NLP libraries to .NET, I am glad to introduce one more library – Stanford Log-linear Part-Of-Speech Tagger.

To compile stanford-postagger.jar to a .NET assembly you need nothing special; just follow the steps from my previous post “NLP: Stanford Parser with F# (.NET)“. You can also download an already compiled version from GitHub.

What is Stanford POS Tagger?

A Part-Of-Speech Tagger (POS Tagger) is a piece of software that reads text in some language and assigns parts of speech to each word (and other token), such as noun, verb, adjective, etc., although generally computational applications use more fine-grained POS tags like ‘noun-plural’.

Read more about Part-of-speech tagging on Wikipedia.

Let’s play!

I was really surprised by the performance of the .NET version of the Stanford POS Tagger. It is fast enough! If you do not need advanced syntactic dependencies between the words and part-of-speech information is enough, then do not use the Stanford Parser; the Stanford POS Tagger is just what you need.

module TaggerDemo

open java.io
open java.util

open edu.stanford.nlp.ling
open edu.stanford.nlp.tagger.maxent

open IKVM.FSharp

let model = @"..\..\..\..\StanfordNLPLibraries\stanford-postagger\models\wsj-0-18-left3words.tagger"

let tagReader (reader:Reader) =
    let tagger = MaxentTagger(model)
    // Split the input into sentences, then tag each sentence
    MaxentTagger.tokenizeText(reader)
    |> Collections.toSeq
    |> Seq.iter (fun sentence ->
        let tSentence = tagger.tagSentence(sentence :?> List)
        printfn "%O" (Sentence.listToString(tSentence, false)))

let tagFile (fileName:string) =
    tagReader (new BufferedReader(new FileReader(fileName)))

let tagText (text:string) =
    tagReader (new StringReader(text))

As you see, it is really simple to use. We instantiate MaxentTagger and initialize it with the wsj-0-18-left3words.tagger model. After that we load the text, tokenize it into sentences, and tag the sentences one by one.
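Usage is then a one-liner (the file name below is hypothetical):

// Tag a string directly
tagText "The mission of the F# Software Foundation is to promote the F# programming language."
// Tag the contents of a file
tagFile "mission.txt"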

Let’s test the tagger on the F# Software Foundation Mission Statement =).

Mission Statement

The mission of the F# Software Foundation is to promote, protect, and advance the F# programming language, and to support and facilitate the growth of a diverse and international community of F# programmers.

Tagging result:

Mission/NNP Statement/NNP 
The/NNP mission/NN of/IN the/DT F/NN #/# Software/NNP Foundation/NNP is/VBZ 
to/TO promote/VB ,/, protect/VB ,/, and/CC advance/NN the/DT F/NN #/# 
programming/VBG language/NN ,/, and/CC to/TO support/VB and/CC facilitate/VB 
the/DT growth/NN of/IN a/DT diverse/JJ and/CC international/JJ community/NN 
of/IN F/NN #/# programmers/NNS ./.

Descriptions of the POS tags can be found here.

NLP: Stanford Parser with F# (.NET)

Update (2014, January 3): Links and/or samples in this post might be outdated. The latest versions of the samples are available on the new Stanford.NLP.NET site.

All code samples from this post are available on GitHub.

Natural Language Processing is one more hot topic, like Machine Learning. For sure, it is extremely important, but it is poorly supported on .NET.

What do we have in .NET?

Let’s start with what we already have.

It looks really bad. It is hard to find something that is really useful. Actually, we have one more option: IKVM.NET. With IKVM.NET we should be able to use most Java-based NLP frameworks. Let’s try to import the Stanford Parser into .NET.

IKVM.NET overview.

IKVM.NET is an implementation of Java for Mono and the Microsoft .NET Framework. It includes the following components:

  • A Java Virtual Machine implemented in .NET
  • A .NET implementation of the Java class libraries
  • Tools that enable Java and .NET interoperability

Read more about what you can do with IKVM.NET.
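As a quick illustration of that interoperability, here is a minimal sketch that uses a Java collection type directly from F# (assuming the IKVM.OpenJDK.Core.dll assembly is referenced):

open java.util

// java.util.ArrayList consumed directly from .NET code
let javaList = ArrayList()
javaList.add("Hello from the Java class library!") |> ignore
printfn "%O" (javaList.get(0))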

About Stanford NLP

The Stanford NLP Group makes parts of our Natural Language Processing software available to the public. These are statistical NLP toolkits for various major computational linguistics problems. They can be incorporated into applications with human language technology needs.

All the software we distribute is written in Java. All recent distributions require Sun/Oracle JDK 1.5+. Distribution packages include components for command-line invocation, jar files, a Java API, and source code.

IKVM .jar to .dll compilation

First of all, we need to download and install IKVM.NET; you can get it from SourceForge. The next step is to download the Stanford Parser (the current latest version is 2.0.4 from 2012-11-12). Now we need to compile stanford-parser.jar to a .NET assembly. You can do it with the following command:

ikvmc.exe stanford-parser.jar

If you need a strongly named (signed) assembly, then you should do two more steps:

ildasm.exe /all /out=stanford-parser.il stanford-parser.dll
ilasm.exe /dll /key=myKey.snk stanford-parser.il

A signed stanford-parser.dll is now available on GitHub.

Let’s play!

That’s all! Now we are ready to start playing with the Stanford Parser. I want to show one of the standard examples here (ParserDemo.fs); the second one is available on GitHub with the other sources.

let demoAPI (lp:LexicalizedParser) =
  // This option shows parsing a list of correctly tokenized words
  let sent = [|"This"; "is"; "an"; "easy"; "sentence"; "." |]
  let rawWords = Sentence.toCoreLabelList(sent)
  let parse = lp.apply(rawWords)
  parse.pennPrint()

  // This option shows loading and using an explicit tokenizer
  let sent2 = "This is another sentence."
  let tokenizerFactory = PTBTokenizer.factory(CoreLabelTokenFactory(), "")
  use sent2Reader = new StringReader(sent2)
  let rawWords2 = tokenizerFactory.getTokenizer(sent2Reader).tokenize()
  let parse = lp.apply(rawWords2)

  // Extract typed dependencies from the second parse
  let tlp = PennTreebankLanguagePack()
  let gsf = tlp.grammaticalStructureFactory()
  let gs = gsf.newGrammaticalStructure(parse)
  let tdl = gs.typedDependenciesCCprocessed()
  printfn "\n%O\n" tdl

  // Print the tree in Penn Treebank format with collapsed dependencies
  let tp = new TreePrint("penn,typedDependenciesCollapsed")
  tp.printTree(parse)

let main fileName =
  let lp = LexicalizedParser.loadModel(@"..\..\..\..\StanfordNLPLibraries\stanford-parser\stanford-parser-2.0.4-models\englishPCFG.ser.gz")
  match fileName with
  | Some(file) -> demoDP lp file // demoDP (the file-parsing demo) is defined in the full sample on GitHub
  | None -> demoAPI lp

What are we doing here? First of all, we instantiate LexicalizedParser and initialize it with the englishPCFG.ser.gz model. Then we create two sentences. The first one is created from an already tokenized string (a string array, in this sample). The second one is created from a plain string using PTBTokenizer. After that, we build a grammatical structure for the second parse using the Penn Treebank language pack and extract its typed dependencies. Finally, we print the parse tree together with the collapsed dependencies. The resulting output can be found below.

Loading parser from serialized file ..\..\..\..\StanfordNLPLibraries\
stanford-parser\stanford-parser-2.0.4-models\englishPCFG.ser.gz ... 
done [1.5 sec].
(ROOT
  (S
    (NP (DT This))
    (VP (VBZ is)
      (NP (DT an) (JJ easy) (NN sentence)))
    (. .)))

[nsubj(sentence-4, This-1), cop(sentence-4, is-2), det(sentence-4, another-3), 
root(ROOT-0, sentence-4)]
(ROOT
  (S
    (NP (DT This))
    (VP (VBZ is)
      (NP (DT another) (NN sentence)))
    (. .)))
nsubj(sentence-4, This-1)
cop(sentence-4, is-2)
det(sentence-4, another-3)
root(ROOT-0, sentence-4)

I want to mention one more time that the full source code is available in the fsharp-stanford-nlp-samples GitHub repository. Feel free to use and extend it.

FSharp.ML – industry needs. (Machine Learning for .NET)

Machine Learning is a hot topic nowadays. ML is a core part of Data Analysis and an auxiliary tool in many domains (NLP, search engines, e-commerce solutions, etc.). Many ML-related courses are available on Coursera in the “Statistics, Data Analysis, and Scientific Computing” and “Computer Science: Artificial Intelligence, Robotics, Vision” sections. Kaggle holds ML competitions more and more often.

Java has some popular and recognized ML libraries such as Mahout and Weka, but it is much harder to find a high-performance .NET ML library (one that does not run on IKVM.NET).

What is already available in .NET World?

As Don Syme said, it would be cool to have an independent comparison of the already available ML libraries. We need to understand what is suitable for which needs.

I also want to mention some of the most promising ones:

What can we do?

We keep saying that F# is great for data scientists and statisticians, and so it is! We still do not have a mature F# ML library, but we have a lot of posts about ML and a lot of interest in this domain:

It is time to put it all together into FSharp.ML. This can be done in two parts: a complete functional ML framework plus a collection of useful customizable samples.

F#/.NET function minimization (optimization)

I have done some research on function minimization algorithms implemented on .NET. A short summary can be found below.

Gradient descent

Gradient descent is one of the simplest function optimization algorithms. You can implement it yourself or start from one of the many articles on the subject; a minimal sketch is shown below.
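This hand-rolled sketch is purely illustrative and not part of any library; the learning rate and iteration count are arbitrary example values:

// Plain gradient descent: x(k+1) = x(k) - alpha * grad f(x(k)).
// No line search, so alpha must be small for badly conditioned
// functions like the banana function defined below.
let gradientDescent (grad: float[] -> float[]) (alpha: float) iterations (x0: float[]) =
    let step (x: float[]) = Array.map2 (fun xi gi -> xi - alpha * gi) x (grad x)
    { 1 .. iterations } |> Seq.fold (fun x _ -> step x) x0

// Example: gradientDescent BananaFunctionGradient 1e-3 100000 [|0.1; 2.0|]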


DotNumerics

DotNumerics is a numerical library for .NET. The library is written in pure C# and has more than 100,000 lines of code with the most advanced algorithms for Linear Algebra, Differential Equations and Optimization problems.

Unfortunately, dotNumerics does not have detailed documentation. Let’s go through all the minimization algorithms implemented in dotNumerics. First of all, we implement the banana function from the simplex method example available on the library site.

#r @"DotNumerics.dll"
open System
open DotNumerics.Optimization

//f(a,b) = 100*(b-a^2)^2 + (1-a)^2
let BananaFunction (x: float array) =
    100.0 * Math.Pow((x.[1] - x.[0] * x.[0]), 2.0) + Math.Pow((1.0 - x.[0]), 2.0)

Downhill Simplex

Downhill Simplex method of Nelder and Mead

The key advantage of the Downhill Simplex method is that it does not require a gradient function. All you need is a function and an initial guess.

let initialGuess = [|0.1; 2.0|]

let simplexMin =
    let simplex = Simplex()
    simplex.ComputeMin(BananaFunction, initialGuess)

We have a bit of control over the evaluation model. We can restrict MaxFunEvaluations and specify custom Tolerance in Simplex model. In this case, model instantiation looks like below.

    let simplex = Simplex(MaxFunEvaluations=10000, Tolerance=1e-5)

Truncated Newton

“A Survey of Truncated-Newton Methods”, Journal of Computational and Applied Mathematics.

All the other algorithms require a gradient function for their calculations.

//f'a(a,b) = (100*(b-a^2)^2 + (1-a)^2)'a = 100*2*(b-a^2)*(-2a) - 2*(1-a)
//f'b(a,b) = (100*(b-a^2)^2 + (1-a)^2)'b = 100*2*(b-a^2)
let BananaFunctionGradient (x: float array) =
    [|100.0 * 2.0 * (x.[1] - x.[0] * x.[0]) * (-2.0 * x.[0]) - 2.0 * (1.0 - x.[0]);
      100.0 * 2.0 * (x.[1] - x.[0] * x.[0])|]

let newtonMin =
    let newton = TruncatedNewton()
    newton.ComputeMin(BananaFunction, BananaFunctionGradient, initialGuess)

The Truncated Newton algorithm has three more configuration parameters than Downhill Simplex: Accuracy, MaximunStep and SearchSeverity.
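By analogy with the Simplex example above, they can presumably be set at construction time as well (the values here are purely illustrative, not recommendations):

    let newton = TruncatedNewton(Accuracy=1e-6, MaximunStep=10.0, SearchSeverity=1.0)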


Limited memory Broyden–Fletcher–Goldfarb–Shanno method

let bfgsMin =
    let lbfgsb = L_BFGS_B()
    lbfgsb.ComputeMin(BananaFunction, BananaFunctionGradient, initialGuess)

L-BFGS-B has one more configuration parameter than Downhill Simplex: AccuracyFactor.


Below you can find the evaluation results produced by the models with default parameters.

Real: 00:00:00.024, CPU: 00:00:00.062, GC gen0: 0, gen1: 0, gen2: 0
val simplexMin : float [] = [|0.999999998; 0.9999999956|]
Real: 00:00:00.074, CPU: 00:00:00.078, GC gen0: 0, gen1: 0, gen2: 0
val newtonMin : float [] = [|0.9999999999; 0.9999999999|]
Real: 00:00:00.137, CPU: 00:00:00.140, GC gen0: 0, gen1: 0, gen2: 0
val bfgsMin : float [] = [|1.0; 1.0|]