Statistical approaches to Named Entity Recognition are trained for specific types of text and sometimes deliver poor performance on others, either due to language or formatting. Purely empirical approaches, like the one presented here, do not have this limitation and may thus be better suited for the messy data of digital investigations, as well as being easier to explain. Code and the experimental corpus are made available. Feel free to send me an email if you would like to get the model.
Named Entity Recognition (NER) is the process of extracting rigid designators from unstructured text. These entities are typically organized in classes like people, organizations and locations.
NER may be valuable during digital investigations for a number of reasons. First, generating a histogram of names in a dataset provides a quick overview that may help guide further analysis. By combining names with other types of entities, like phone numbers and e-mail addresses, we may also generate a network of entities belonging to the same person and thus create a more complete social network from our data.
However, Conditional Random Field (CRF) and other statistical methods for NER can be quite annoying. They do not always recognize all instances of the exact same name when it appears in different contexts (different content before and after). An example from the Enron dataset using Stanford CRF NER with the english.all.3 classifier: the text
Christopher S Smart yields a true positive, while
firstname.lastname@example.org; Christopher S Smart; email@example.com; does not.
Assistant to Louise Kitchen yields a true positive, while
cc: Mel Dozier/ECT, Louise Kitchen/LON@ECT Subject: yields
Mel Dozier/ECT and Louise, messing up both names.
Historically, approaches to Named Entity Recognition have been based on either statistical models or expert rules, both usually supported by some empirical data. In this post I present an almost entirely empirical approach on the rationale that the best way to find all names is to know every name. This is probably impossible, but mining the Internet for names makes it possible to get close enough to make it interesting. I call the method Namefinder.
Some benefits of a context-independent and empirical approach during a digital investigation are that all instances of the same name are guaranteed to be recognized, and that the results are (IMO) easier to explain (either the name is in the model or it is not).
In contrast to other approaches, this one is limited to people only. Furthermore, valid names must consist of at least two words.
Experiments are performed against three manually labelled datasets and compared against Stanford CRF NER with the classifier english.all.3. The 500newsgoldstandard dataset is already labelled, but it is missing quite a few labels (e.g. “Justin Thomas” in record id=14) and has been adjusted. The other two are excerpts from the Enron corpus, and headlines retrieved from the Norwegian broadcaster NRK. The dataset is available here.
 https://github.com/AKSW/n3-collection (Copyright AKSW) (last accessed: 2016-09-03)
This method relies completely on the supplied model. In cases where I already know the relevant names, those may be sufficient as a model. But most often this is not the case, and then I need a really big list. I’m not going to say where, but the list used here is from more than one open source on the Internet. It consists of more than 100 million unique names and 10 million unique words (tokens).
Ten million tokens are impractical for three reasons: the model is big (a file of over 220MB), it consequently takes longer to load, and it produces a lot of false positives. The model must therefore be pruned.
The goal of pruning is to minimize the number of tokens, maximize the true positive rate and minimize false positive rate. This is done in three steps, called mixed_case, infrequent and blacklist:
|Step||Goal||# Tokens||Model size||Avg. recall *||Avg. precision *|
|0. original||Max. recall||10046502||222MB||0.94||0.47|
|1. mixed_case||Max. precision||10008311||221MB||0.91||0.65|
|2. infrequent||Min. size||1125447||22MB||0.88||0.65|
|3. blacklist||Max. precision||1125438||22MB||0.88||0.67|
* Across the three evaluated datasets.
The mixed case pruning was done by finding and counting all lower-cased words in the CoNLL 2002 corpus . A token is pruned if its relative log frequency as a lower-cased word is significantly higher than its relative log frequency in the model.
All infrequent tokens, here those with a relative log frequency lower than 0.07, were then pruned.
Finally, nine manually blacklisted tokens (
hallo hey hi ms mr sen gov dr hei) were pruned.
The result is a 22MB large model with just above 1 million unique tokens.
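Assuming the model maps tokens to relative log frequencies (my interpretation; the actual data structures may differ, and my mixed_case test below uses a plain comparison rather than a significance test), the three pruning steps can be sketched as:

```python
BLACKLIST = {"hallo", "hey", "hi", "ms", "mr", "sen", "gov", "dr", "hei"}

def prune_model(model, lowercase_freq, min_freq=0.07):
    """Prune a token -> relative log frequency model in three steps.

    model:          name tokens mapped to relative log frequency
    lowercase_freq: lower-cased common words (e.g. counted from the
                    CoNLL 2002 corpus) mapped to relative log frequency
    """
    pruned = {}
    for token, freq in model.items():
        # 1. mixed_case: drop tokens that are more frequent as common
        #    lower-cased words than as name tokens
        if lowercase_freq.get(token.lower(), 0.0) > freq:
            continue
        # 2. infrequent: drop rare tokens to shrink the model
        if freq < min_freq:
            continue
        # 3. blacklist: drop manually listed noise tokens
        if token.lower() in BLACKLIST:
            continue
        pruned[token] = freq
    return pruned
```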
 http://www.cnts.ua.ac.be/conll2002/ner/ (Last accessed: 2016-09-03)
The approach is to compute the probability that two or more closely positioned, capitalized words form a name. This is done by first computing a probability score for each token x, as shown here:
Sx is the start of the current token and Ex-1 is the end of the previous token. The probability score is thus also based on the distance between the two tokens. This allows for some content between the tokens, like abbreviations. The substring between two tokens, called the gap, is also used to detect invalid strings, eliminating any candidate names containing them.
To allow for some blind tokens, a luck attribute is used, so that if a token is not in the model, it still has a chance of being included. The rationale is that a token should be included if it frequently occurs together with valid tokens.
The probability for a set of tokens is simply their average probability.
The code for doing name recognition with
namefinder.py, given a model, is well under 100 lines of code.
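The actual code is not reproduced here, but based on the description above a minimal recognizer might look roughly like this. The regexes, parameter names and scoring details are my own illustration, not the real namefinder.py:

```python
import re

TOKEN = re.compile(r"[A-ZÆØÅ][\w.]*")   # capitalized candidate tokens
BAD_GAP = re.compile(r"[;:@()\"]")       # gap content that invalidates a name

def find_names(text, model, threshold=0.2, luck=0.1, max_gap=2):
    """Scan text for runs of capitalized tokens whose average score in
    the model exceeds threshold. Unknown tokens score `luck`."""
    names, run, prev_end = [], [], None
    for m in TOKEN.finditer(text):
        gap = text[prev_end:m.start()] if prev_end is not None else ""
        # break the current run on large or invalid gaps between tokens
        if run and (len(gap) > max_gap or BAD_GAP.search(gap)):
            run = []
        run.append(m.group())
        prev_end = m.end()
        # emit every trailing sub-sequence of at least two tokens
        for i in range(len(run) - 1):
            cand = run[i:]
            avg = sum(model.get(t, luck) for t in cand) / len(cand)
            if avg >= threshold:
                names.append(" ".join(cand))
    return names
```

With a toy model containing "Petter" and "Bjelland", the text "Petter Chr. Bjelland" yields "Petter Chr.", "Petter Chr. Bjelland" and "Chr. Bjelland", matching the behavior discussed in the results section below.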
The experiments are performed on three labelled datasets, nrk, enron, and news500. Note that the nrk dataset is in Norwegian, so classifiers trained for English text are expected to underperform. Still, in investigations we may not know in advance which languages we are dealing with, so it is not an unrealistic scenario. Some statistics on the datasets:
|Dataset||# Records||# Unique names|
For these experiments, namefinder is compared against Stanford CRF NER (2015-12-09)  with the classifier english.all.3class.distsim.crf.ser.gz. This classifier also detects single-word names; however, these are excluded before computing the metrics to make the results comparable.
Namefinder runs with parameters threshold 0.2 and luck 0.1.
Results are computed based on the values true_positive, false_positive and actual_positive. All three values hold distinct names, so the number of times a potential name is recognized does not affect the results. Based on these values, the statistical measures recall (the fraction of actual names found), precision (the fraction of recognized names that are correct) and F1 score (their harmonic mean) are computed.
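For sets of distinct names, the three measures can be computed like this:

```python
def evaluate(found, actual):
    """Compute recall, precision and F1 over sets of distinct names."""
    found, actual = set(found), set(actual)
    tp = len(found & actual)   # true positives: correctly recognized names
    fp = len(found - actual)   # false positives: recognized but not real
    recall = tp / len(actual) if actual else 0.0
    precision = tp / (tp + fp) if found else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return recall, precision, f1
```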
 http://nlp.stanford.edu/software/CRF-NER.shtml (Last accessed: 2016-09-03)
|Method||Recall||Precision||F1|
|stanford crf ner||0.64||0.66||0.65|
|stanford crf ner||0.91||0.94||0.93|
|stanford crf ner||0.38||0.60||0.47|
The results above show that namefinder maintains a consistently high recall, at the cost of lower precision. I argue that the false positives are still manageable. The tradeoff between recall and precision depends on the selected parameters threshold and luck.
One of the reasons Stanford NER performs so poorly on the enron data is that many names are in a format where the family name comes first, like
Bjelland, Petter, which it does not recognize. It can surely be trained to also accept such formats, but this underlines the point about unpredictability.
There are several reasons why namefinder has a fairly low precision. One of them is that it generates every contiguous subset of a name, so
Petter Chr. Bjelland produces
Petter Chr.,
Chr. Bjelland and
Petter Chr. Bjelland. Two of those will likely be false positives.
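For illustration, generating every contiguous sub-sequence of at least two tokens can be done like this (a hypothetical helper, not taken from the actual code):

```python
def subnames(tokens):
    """All contiguous sub-sequences of at least two tokens, joined
    into candidate names."""
    n = len(tokens)
    return [" ".join(tokens[i:j])
            for i in range(n)
            for j in range(i + 2, n + 1)]
```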
Namefinder is (IMO) pretty fast. It takes 1.2 seconds to load the model, and it searches Peter Norvig's big.txt , which is 6.2MB of text, in 3.2 seconds. On that file this amounts to almost 300K words per second.
The whole model must fit in memory, and currently the whole text is as well. During the search phase the memory use is constant as a ring buffer is used to store the last N tokens.
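The ring buffer idea can be sketched with Python's collections.deque, which drops the oldest element automatically once the buffer is full (my illustration of the technique, not the actual implementation):

```python
from collections import deque

def stream_tokens(tokens, n=4):
    """Ring buffer over the last n tokens: memory use stays O(n)
    however long the input stream is."""
    window = deque(maxlen=n)
    for tok in tokens:
        window.append(tok)   # at maxlen, the oldest entry falls out
        yield list(window)   # current window of candidate tokens
```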
It may seem weird that the CoNLL 2002 dataset is not part of the experiments. As that challenge includes single-word names, a fair comparison does not seem possible to me. Either I would have to remove the single-word names from that dataset (which could skew the results in my favor), or I would have to count them all as false negatives (which does not sound fair either).
Finally, as statistical and empirical approaches may complement each other, it’s a good idea to combine the two during investigations.
 http://norvig.com/big.txt (Last accessed: 2016-09-03)