2011-02-13

I want to parse a list of sentences with the Stanford NLP parser. My list is an ArrayList; how can I parse the whole list with a LexicalizedParser? How do I parse a list of sentences?

I want to get output of this form from each sentence:

Tree parse = (Tree) lp1.apply(sentence); 

Answers


Actually, the Stanford NLP documentation provides a sample showing how to parse sentences.

You can find the documentation here.


See also the ParserDemo example that comes with the parser. You can call apply() directly on a String that is a sentence. – 2012-01-15 16:38:45


While one could dig into the documentation, I'm going to provide the code here on SO instead, especially since links move and/or die. This particular answer uses the whole pipeline. If you are not interested in the whole pipeline, I'll provide an alternative answer in a second.

The example below is the complete way of using the Stanford pipeline. If you are not interested in coreference resolution, remove dcoref from the 3rd line of code. So, in the example below, if you just feed the pipeline a body of text (the text variable), it will split the text into sentences for you (the ssplit annotator). Have just one sentence? Well, that's fine, you can feed it in as the text variable.

// creates a StanfordCoreNLP object, with POS tagging, lemmatization, NER, parsing, and coreference resolution
Properties props = new Properties();
props.put("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

// read some text in the text variable
String text = ... // Add your text here!

// create an empty Annotation just with the given text
Annotation document = new Annotation(text);

// run all Annotators on this text
pipeline.annotate(document);

// these are all the sentences in this document
// a CoreMap is essentially a Map that uses class objects as keys and has values with custom types
List<CoreMap> sentences = document.get(SentencesAnnotation.class);

for (CoreMap sentence : sentences) {
    // traversing the words in the current sentence
    // a CoreLabel is a CoreMap with additional token-specific methods
    for (CoreLabel token : sentence.get(TokensAnnotation.class)) {
        // this is the text of the token
        String word = token.get(TextAnnotation.class);
        // this is the POS tag of the token
        String pos = token.get(PartOfSpeechAnnotation.class);
        // this is the NER label of the token
        String ne = token.get(NamedEntityTagAnnotation.class);
    }

    // this is the parse tree of the current sentence
    Tree tree = sentence.get(TreeAnnotation.class);

    // this is the Stanford dependency graph of the current sentence
    SemanticGraph dependencies = sentence.get(CollapsedCCProcessedDependenciesAnnotation.class);
}

// This is the coreference link graph
// Each chain stores a set of mentions that link to each other,
// along with a method for getting the most representative mention
// Both sentence and token offsets start at 1!
Map<Integer, CorefChain> graph =
    document.get(CorefChainAnnotation.class);
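Since the question starts from an ArrayList of sentences, one straightforward option is to join them into a single text blob and let the ssplit annotator split them back into sentences. A minimal pure-Java sketch (the helper name is my own, not part of CoreNLP):

```java
import java.util.Arrays;
import java.util.List;

public class JoinSentences {
    // Joins a list of sentences into a single text blob, separated by
    // spaces; the pipeline's ssplit annotator will split it back into
    // sentences when the blob is fed in as the text variable.
    static String join(List<String> sentences) {
        StringBuilder sb = new StringBuilder();
        for (String s : sentences) {
            if (sb.length() > 0) {
                sb.append(' ');
            }
            sb.append(s);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String> sentences = Arrays.asList(
            "It is a fine day today.",
            "The pipeline will split this text again.");
        System.out.println(join(sentences));
    }
}
```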

So as promised, if you don't want to use the complete Stanford pipeline (though I believe that is the recommended approach), you can work with the LexicalizedParser class directly. In that case, you would download the latest version of the Stanford Parser (whereas the other approach uses the CoreNLP tools). Make sure that, in addition to the parser jar, you have the model file for the parser you want to run. Example code:

LexicalizedParser lp = LexicalizedParser.loadModel("englishPCFG.ser.gz");
String sentence = "It is a fine day today";
Tree parse = lp.parse(sentence);

Note that this works for version 3.3.1 of the parser.
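To cover the list from the question with this approach, you can parse each sentence in a loop and collect the trees. A sketch under the same assumptions as above (a 3.3.1-era parser, with englishPCFG.ser.gz on the local path):

```java
import edu.stanford.nlp.parser.lexparser.LexicalizedParser;
import edu.stanford.nlp.trees.Tree;
import java.util.ArrayList;
import java.util.List;

public class ParseSentenceList {
    public static void main(String[] args) {
        // The model path is an assumption; point it at the model file
        // that ships with your parser download.
        LexicalizedParser lp = LexicalizedParser.loadModel("englishPCFG.ser.gz");

        List<String> sentences = new ArrayList<String>();
        sentences.add("It is a fine day today.");
        sentences.add("Tomorrow may be fine as well.");

        // One parse tree per input sentence, in order.
        List<Tree> parses = new ArrayList<Tree>();
        for (String sentence : sentences) {
            parses.add(lp.parse(sentence));
        }
    }
}
```

This keeps sentence boundaries exactly as they are in the ArrayList, whereas the pipeline approach re-splits the text itself.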