In today's study group, Panos discussed a paper on Forensic Stylometry, a form of Authorship Attribution. The paper proposes the Classify-Verify method and is available at www.stolerman.net. Authorship Attribution (AAtr) has two basic settings: the Closed World problem and the Open World problem. The former assumes that the author is in the suspect set, whereas in Open World problems the author might not be in it. For this reason, the authors merge CLASSIFICATION and VERIFICATION: they utilise an existing distance-based authorship VERIFICATION method, but they also add per-feature standard-deviation normalisation and per-author threshold normalisation to the scheme.
Stylometry is used to analyse anonymous written communications, with the goal of de-anonymising them. Traditional methods need a suspect set to do this reliably; the paper bypasses the strong assumption that the author is inside the suspect set. Current state-of-the-art methods rely mainly on machine-learning techniques and can identify individuals in sets of 50 candidate authors with around 90% accuracy. However, the already proposed algorithms come with certain restrictions. In Authorship Attribution, given a document D of unknown authorship and a set of authors A = {A1, A2, …, An}, we have to determine the author Ai of D. In Authorship Verification, given D and a single author A, we must decide whether D was written by A. The paper suggests merging the two worlds: given a document D of unknown authorship and documents written by a set of known authors A, determine the author Ai ∈ A of D, OR indicate that the author of D is not in the author set.
The authors use two different corpora for their experiments: the EBG corpus (45 authors, at least 6,500 words per author) plus its adversarial documents, and the ICWSM 2009 Spinn3r Blog dataset (a blog corpus of around 44M posts). For classification they use a CLOSED-world SVM classifier provided by Weka (SMO SVM with complexity parameter C = 1), and they choose only one type of feature from the Writeprints feature set, which was originally used to quantify the EBG corpus (over 90% accuracy for 50 authors). The evaluation was done using 10-fold cross-validation, measured over the k most common character n-grams (n = 1–5) for k between 50 and 1000 in steps of 50. Finally, they chose the 500 most common character bigrams (which they call <500,2>-chars) as their feature set. FEATURE EXTRACTION is done using the JStylo and JGAAP authorship-attribution APIs.
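The actual feature extraction relies on JStylo/JGAAP, but as a rough illustration of what a <500,2>-chars representation involves, here is a minimal Python sketch (the function names and the relative-frequency choice are assumptions for illustration, not the paper's implementation):

```python
from collections import Counter

def char_bigrams(text):
    """Return the character bigrams of a document."""
    return [text[i:i + 2] for i in range(len(text) - 1)]

def build_feature_space(documents, k=500):
    """Pick the k most common character bigrams across the training corpus."""
    counts = Counter()
    for doc in documents:
        counts.update(char_bigrams(doc))
    return [bigram for bigram, _ in counts.most_common(k)]

def vectorise(document, feature_space):
    """Represent a document as relative frequencies over the chosen bigrams."""
    counts = Counter(char_bigrams(document))
    total = sum(counts.values()) or 1
    return [counts[f] / total for f in feature_space]
```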
For verification they use Classifier-Induced verifiers, which require a closed-world classifier and use its class output for verification. The other family is Standalone verifiers, which rely on a model built from the author's training data, independent of other classifiers and authors. Classifier-induced verification works with classifiers whose output can be turned into a confidence, such as distance-based or probabilistic classifiers: a higher confidence in an author may indicate that the author is in the suspect set, while a lower confidence may indicate that he is not. So, after the classification, verification is based on setting an acceptance threshold t: we measure the confidence of the classifier and accept the classification if it is above t. In this category the confidences are probabilities. The paper evaluates three such verification methods: P1, P1-P2-Diff and Gap-Conf.
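As a hedged sketch of the classifier-induced idea, assuming probs is the per-author probability vector produced by the closed-world classifier: P1 is the top probability and P1-P2-Diff is the gap between the two highest (Gap-Conf is omitted here because it depends on per-class SVM outputs not modelled in this sketch):

```python
def p1(probs):
    """P1: the probability the classifier assigns to its chosen author."""
    return max(probs)

def p1_p2_diff(probs):
    """P1-P2-Diff: gap between the two most probable authors."""
    top, second = sorted(probs, reverse=True)[:2]
    return top - second

def classifier_induced_verify(probs, t, score=p1_p2_diff):
    """Accept the classifier's answer only if its confidence reaches the threshold t."""
    return score(probs) >= t
```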
For Standalone Verification, the Classify-Verify model is evaluated using either Distractorless Verification or the proposed Sigma Verification. The basic verifier (V: Distractorless) combines a distance with a threshold: set an acceptance threshold t, model document D and author A as feature vectors, measure the distance between them, and decide that D is written by A if the distance is below t.
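A minimal sketch of that decision rule, assuming cosine distance and treating the author A as a single feature vector built from their training documents (both assumptions for illustration; the paper defines the exact distance):

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norms if norms else 1.0

def distractorless_verify(doc_vec, author_vec, t):
    """V: accept 'A wrote D' iff the distance between their vectors is below t."""
    return cosine_distance(doc_vec, author_vec) < t
```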
The proposed Sigma Verification scheme applies two adjustments to the former method: a) Vσ, per-feature SD normalisation, which accounts for the variance of the author's writing style (it uses the standard deviation of each feature), and b) Vα, per-author threshold normalisation. The evaluation of these approaches indicates that no single verification method is preferable to the others, and the selection of a verifier should rely on empirical testing: sometimes V outperforms Vσ or Vα, and sometimes the opposite holds.
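One plausible reading of the Vσ adjustment, sketched below under the assumption that it standardises each feature by the author's per-feature standard deviation before the distance is taken (this is only an illustration of the idea, not the paper's exact formula):

```python
def sigma_normalised_distance(doc_vec, author_vecs):
    """Vσ sketch: standardise each feature by the author's per-feature standard
    deviation, so features the author uses erratically count less than features
    they use consistently."""
    n = len(author_vecs)
    means = [sum(col) / n for col in zip(*author_vecs)]
    sds = [max((sum((x - m) ** 2 for x in col) / n) ** 0.5, 1e-9)
           for col, m in zip(zip(*author_vecs), means)]
    doc_std = [(d - m) / s for d, m, s in zip(doc_vec, means, sds)]
    # After standardisation the author's centroid sits at the origin, so the
    # verification score is the document's distance from the origin.
    return sum(x * x for x in doc_std) ** 0.5
```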
The proposed C-V algorithm is an abstaining classifier, i.e. a classifier that refrains from classification in certain cases in order to reduce misclassifications. Essentially, the authors extend the closed-world authorship problem to the open world by adding another class, 'UNKNOWN': closed-world classification is applied on D and A = {A1, A2, …, An}, the output is handed to the verifier, and the verifier determines whether to accept Ai or reject it (⊥), so that C-V is effectively a classifier over A ∪ {⊥} (a sketch of the overall procedure follows the list below). The threshold of the C-V verifier can be determined as follows:
- Manually: set by the user (making the classifier stricter or more relaxed),
- p-Induced Threshold: the threshold is set empirically over the training set, in an automated process, to the value that maximises the target measurement, e.g. the F1-score,
- in-set/not-in-set-Robust: if p is not known, we examine various values of p and t; there will be a point where the curves intersect (p is the probability that the author of the document is in the set).
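Putting the pieces together, here is a minimal sketch of the Classify-Verify flow, where classifier and verifier are hypothetical objects standing in for the Weka SMO SVM and one of the verifiers above, and where a confidence-style score is assumed (a distance-based verifier would accept below t instead):

```python
UNKNOWN = "<UNKNOWN>"  # the extra reject class, written ⊥ in the paper

def classify_verify(doc_vec, classifier, verifier, t):
    """Run the closed-world classifier over the suspect set A, then let the
    verifier accept its answer or replace it with UNKNOWN."""
    candidate = classifier.predict(doc_vec)          # closed-world step: some Ai in A
    confidence = verifier.score(doc_vec, candidate)  # e.g. P1, P1-P2-Diff or Vσ
    return candidate if confidence >= t else UNKNOWN
```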
The authors evaluate two different settings: when the authors of the documents are in the suspect set (in-set) and when they are not (not-in-set). One assumption they make is that if D is written by A but classified as B, and the verifier then replaces B with ⊥, the result is counted as correct. For the CLASSIFICATION phase they train n (n-1)-class classifiers using the SMO SVM discussed previously. For the VERIFICATION phase they evaluate several methods: one standalone method for each corpus and all the classifier-induced methods (Gap-Conf, P1, P1-P2-Diff). They use the F1-score because it provides a balanced measure of precision and recall. For the threshold they use the two automatic methods: if p is known, they use the p-induced threshold that maximises the F1-score on the training set (with p = 0.5); if p is unknown, they use the robust threshold p-F1R. As a baseline they compare F1-scores against 10-fold cross-validation results of closed-world classification using the SMO SVM with the <500,2>-chars feature set. Finally, for the adversarial-settings evaluation, they train their models on the non-adversarial documents of EBG and test them on the imitation documents, to see how well C-V thwarts attacks.
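As an illustration of the p-induced threshold, here is a simplified sketch that grid-searches for the threshold maximising F1 on training data; it scores only the accept/reject decision, whereas the paper optimises the F1 of the full Classify-Verify output, so treat it as an approximation:

```python
def p_induced_threshold(train_scores, train_in_set, candidate_ts):
    """Pick the threshold maximising F1 on training data, where train_scores are
    verifier confidences and train_in_set flags whether the true author really is
    in the suspect set (the in-set proportion reflects p)."""
    def f1_at(t):
        accepted = [s >= t for s in train_scores]
        tp = sum(a and g for a, g in zip(accepted, train_in_set))
        fp = sum(a and not g for a, g in zip(accepted, train_in_set))
        fn = sum((not a) and g for a, g in zip(accepted, train_in_set))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return max(candidate_ts, key=f1_at)
```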
Their results can be summarised as follows. For both the EBG and the blog corpora, the 0.5-F1 results significantly outperform the 0.5-Base results with any of the underlying verification methods, at a confidence level of p-val < 0.01. In general, the underlying verifiers thwarted a large proportion of misclassifications, leading to an overall increase in F1-score. Among all verification methods, P1-P2-Diff proves to be the preferable verifier, since it consistently outperforms the other methods across almost all values of p for both corpora, which suggests it is robust to domain variation. For adversarial settings, they show that the closed-world classifier is highly vulnerable to these types of attacks, whereas C-V manages to thwart the majority of them. In addition, the results suggest that classifier-induced verifiers consistently outperform the standalone ones. Overall, the method is able to replace wrong assertions with more honest and useful statements of "unknown".