
openNLP java - multi-word Portuguese NER

I am using the openNLP API in Java for a project I am working on. The thing is, my program only ever handles individual words, with no connection between them. Code:

String line = input.nextLine();

InputStream inputStreamTokenizer = new FileInputStream("/home/bruno/openNLP/apache-opennlp-1.7.2-src/models/pt-token.bin");
TokenizerModel tokenModel = new TokenizerModel(inputStreamTokenizer);

//Instantiating the TokenizerME class
TokenizerME tokenizer = new TokenizerME(tokenModel);
String tokens[] = tokenizer.tokenize(line);

InputStream inputStream = new FileInputStream("/home/bruno/openNLP/apache-opennlp-1.7.2-src/models/pt-sent.bin");
SentenceModel model = new SentenceModel(inputStream);

//Instantiating the SentenceDetectorME class
SentenceDetectorME detector = new SentenceDetectorME(model);

//Detecting the sentence
String sentences[] = detector.sentDetect(line);

//Loading the NER-location model
//InputStream inputStreamLocFinder = new FileInputStream("/home/bruno/openNLP/apache-opennlp-1.7.2-src/models/en-ner-location.bin");
//TokenNameFinderModel model = new TokenNameFinderModel(inputStreamLocFinder);

//Loading the NER-person model
InputStream inputStreamNameFinder = new FileInputStream("/home/bruno/TryOllie/data/pt-ner-floresta.bin");
TokenNameFinderModel model2 = new TokenNameFinderModel(inputStreamNameFinder);

//Instantiating the NameFinderME class
NameFinderME nameFinder2 = new NameFinderME(model2);

//Finding the names of a location
Span nameSpans2[] = nameFinder2.find(tokens);

//Printing the spans of the locations in the sentence
//for(Span s: nameSpans)
//System.out.println(s.toString()+" "+tokens[s.getStart()]);

Set<String> x = new HashSet<String>();
x.add("event");
x.add("artprod");
x.add("place");
x.add("organization");
x.add("person");
x.add("numeric");

SimpleTokenizer simpleTokenizer = SimpleTokenizer.INSTANCE;
Span[] tokenz = simpleTokenizer.tokenizePos(line);
Set<String> tk = new HashSet<String>();
for(Span tok : tokenz){
    tk.add(line.substring(tok.getStart(), tok.getEnd()));
}

for(Span n: nameSpans2)
{
    if(x.contains(n.getType()))
        System.out.println(n.toString()+ " -> " + tokens[n.getStart()]);
}

The output I get is:

Ficheiro com extensao: file.txt 
[1..2) event -> choque
[3..4) event -> cadeia
[6..7) artprod -> viaturas
[13..14) event -> feira
[16..18) place -> Avenida
[20..21) place -> Porto
[24..25) event -> incêndio
[2..3) event -> acidente
[5..6) artprod -> viaturas
[44..45) organization -> JN
[46..47) person -> António
[47..48) place -> Campos
[54..60) organization -> Batalhão
[1..2) event -> acidente
[6..8) numeric -> 9
[11..12) place -> Porto-Matosinhos
[21..22) event -> ocorrência
[29..30) artprod -> .
[4..5) organization -> Sapadores
[7..10) organization -> Bombeiros
[14..15) numeric -> 15

What I am trying to do is multi-word NER, so that, for example, António Campos comes out as one person, not as person -> António plus place -> Campos, and likewise organization -> Universidade Nova de Lisboa as a single entity.

Answers


Stanford-NLP only works on individual words. Even if you give coreNLP a whole sentence, it will break it into tokens and process them one by one. I have never heard of NER working on multi-word terms.


I break the text into sentences and tokenize every word, so my program only looks at each word separately, and that is exactly where my problem lies: I have to tokenize, but I need to look at the words together, e.g. find Antonio Campos as one person, instead of Antonio as a person and Campos as a place, or as another person... The same goes for an organization like Universidade Fernando Pessoa: I want to find it as one organization, not Universidade as an organization, Fernando as a person and Pessoa as yet another person. – Break


Well, I understand your problem, but that is how **Stanford NLP** works. NER only runs after that process is complete (tokenize, ssplit, pos, lemma), so obviously NER only deals with single tokens. Maybe this [link](https://stanfordnlp.github.io/CoreNLP/dependencies.html) will give you some ideas. –
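For reference, that annotator order is something you declare up front when building a CoreNLP pipeline. A minimal sketch, assuming the standard StanfordCoreNLP Properties API (this is separate from the OpenNLP code in the question):

Properties props = new Properties();
// NER runs only after the earlier annotators in this list have produced their output
props.setProperty("annotators", "tokenize, ssplit, pos, lemma, ner");
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);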


So I can only look at single words, which means I cannot solve my problem with that NER? – Break


You are printing the wrong data structure. A span's getStart and getEnd point to the sequence of tokens that make up the entity; you are only printing the first token.
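To make that concrete, here is a minimal sketch (reusing the nameSpans2 and tokens variables from the question; note that getEnd() is exclusive) that prints every token a span covers instead of only the first one:

for (Span n : nameSpans2) {
    // join all tokens from getStart() (inclusive) to getEnd() (exclusive)
    StringBuilder entity = new StringBuilder();
    for (int i = n.getStart(); i < n.getEnd(); i++) {
        if (entity.length() > 0) entity.append(' ');
        entity.append(tokens[i]);
    }
    System.out.println(n.getType() + " -> " + entity);
}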

Also, you are tokenizing before doing sentence detection.

Try the following code:

// load the models outside your loop
InputStream inputStream =
    new FileInputStream("/home/bruno/openNLP/apache-opennlp-1.7.2-src/models/pt-sent.bin");
SentenceModel model = new SentenceModel(inputStream);

//Instantiating the SentenceDetectorME class
SentenceDetectorME detector = new SentenceDetectorME(model);

InputStream inputStreamTokenizer =
    new FileInputStream("/home/bruno/openNLP/apache-opennlp-1.7.2-src/models/pt-token.bin");
TokenizerModel tokenModel = new TokenizerModel(inputStreamTokenizer);
//Instantiating the TokenizerME class
TokenizerME tokenizer = new TokenizerME(tokenModel);

//Loading the NER-person model
InputStream inputStreamNameFinder = new FileInputStream("/home/bruno/TryOllie/data/pt-ner-floresta.bin");
TokenNameFinderModel model2 = new TokenNameFinderModel(inputStreamNameFinder);

//Instantiating the NameFinderME class
NameFinderME nameFinder2 = new NameFinderME(model2);

// read line by line until the input runs out
while (input.hasNextLine()) {
    String line = input.nextLine();

    // first we find sentences
    String sentences[] = detector.sentDetect(line);

    for (String sentence : sentences) {
        // now we find the sentence tokens
        String tokens[] = tokenizer.tokenize(sentence);

        // now we are good to apply NER
        Span[] nameSpans = nameFinder2.find(tokens);

        // now we can print the spans
        System.out.println(Arrays.toString(Span.spansToStrings(nameSpans, tokens)));
    }
}

Thank you for your answer wcolen. I have tried that solution, but the output is:
Ficheiro com extensao: file.txt
[Ljava.lang.String;@6f2b958e [Ljava.lang.String;@1eb44e46 [Ljava.lang.String;@6504e3b2 [Ljava.lang.String;@515f550a [Ljava.lang.String;@626b2d4a [Ljava.lang.String;@5e91993f
Exception in thread "main" java.util.NoSuchElementException: No line found
at java.util.Scanner.nextLine(Scanner.java:1540)
at ner.Wiki.main(Wiki.java:76)
I tried other output methods, such as toString, and it was the same as before. What am I doing wrong here? Thanks – Break


'Span.spansToStrings(...)' returns an array of strings. Use 'Arrays.toString(Span.spansToStrings(nameSpans, tokens))' again. – wcolen
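In other words, Span.spansToStrings already returns the full surface text of each entity; only the printing has to deal with the array. A small sketch, reusing the nameSpans and tokens variables from the answer above:

String[] entities = Span.spansToStrings(nameSpans, tokens);
for (int i = 0; i < entities.length; i++) {
    // prints e.g. "person -> António Campos" instead of [Ljava.lang.String;@6f2b958e
    System.out.println(nameSpans[i].getType() + " -> " + entities[i]);
}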