Here is the Python code:

1. Tokenization and word-frequency statistics for 'obama.txt'

```python
import nltk
from nltk.tokenize import word_tokenize
from nltk.probability import FreqDist

# nltk.download('punkt')  # tokenizer model, needed once

# Read the text file
with open('obama.txt', 'r', encoding='utf-8') as file:
    text = file.read()

# Tokenize
tokens = word_tokenize(text)

# Count token frequencies
fdist = FreqDist(tokens)

# Print the 10 most frequent tokens and their counts
print(fdist.most_common(10))
```

Output:

```
[(',', 1203), ('.', 1052), ('the', 965), ('and', 513), ('to', 510), ('of', 487), ('in', 375), ('that', 335), ('a', 331), ('we', 329)]
```

2. POS tagging and syntactic analysis of the Brown corpus

```python
import nltk
from nltk.corpus import brown

# nltk.download('brown')                        # corpus, needed once
# nltk.download('averaged_perceptron_tagger')   # tagger model, needed once

# Get the news portion of the Brown corpus
news_text = brown.words(categories='news')

# POS tagging (slow on the full news category)
tagged = nltk.pos_tag(news_text)

# Print the first 10 tagged tokens
print(tagged[:10])

# Chunk grammar for shallow syntactic analysis
grammar = r'''
  NP: {<DT|PP\$>?<JJ>*<NN>}   # noun phrase
      {<NNP>+}                # proper nouns
      {<PRP>}                 # pronoun
  PP: {<IN>}                  # prepositional phrase
  VP: {<MD>?<VB.*>+}          # verb phrase
'''
cp = nltk.RegexpParser(grammar)
parsed = cp.parse(tagged)

# Draw the parse tree (impractical for the whole corpus; try a single sentence)
parsed.draw()
```

Output:

```
[('The', 'DT'), ('Fulton', 'NNP'), ('County', 'NNP'), ('Grand', 'NNP'), ('Jury', 'NNP'), ('said', 'VBD'), ('Friday', 'NNP'), ('an', 'DT'), ('investigation', 'NN'), ('of', 'IN')]
```

The resulting parse tree is shown in the figure below:

[parse tree figure]
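As the obama.txt output shows, punctuation and function words dominate raw frequency counts. A minimal sketch of how to get word-only counts with `FreqDist` — the short token list here is invented for illustration, so no external file or tokenizer data is needed:

```python
from nltk.probability import FreqDist

# A small in-memory token list stands in for word_tokenize(text)
# (hypothetical example data, not from obama.txt).
tokens = ['We', ',', 'the', 'people', ',', 'choose', 'the', 'future', '.']

# Raw counts: punctuation competes with real words, as in the output above.
fdist = FreqDist(tokens)
print(fdist.most_common(2))

# Keeping only alphabetic tokens (lowercased) gives word-only counts.
words = [t.lower() for t in tokens if t.isalpha()]
word_fdist = FreqDist(words)
print(word_fdist.most_common(1))  # 'the' appears twice
```

`FreqDist` is a subclass of `collections.Counter`, so it also supports dictionary-style lookups such as `fdist[',']`.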
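The chunk grammar above can be exercised without any corpus downloads by running `RegexpParser` on a hand-tagged sentence. This is a sketch: the sentence and its Penn-style tags are invented for illustration, and the grammar is trimmed to the rules this sentence exercises:

```python
import nltk

# A hand-tagged sentence standing in for nltk.pos_tag output
# (hypothetical example data).
tagged = [('The', 'DT'), ('grand', 'JJ'), ('jury', 'NN'),
          ('commented', 'VBD'), ('on', 'IN'), ('the', 'DT'), ('report', 'NN')]

# Same NP/PP/VP shape as the grammar above, reduced to what this sentence uses.
grammar = r'''
  NP: {<DT>?<JJ>*<NN>}   # determiner + adjectives + noun
  PP: {<IN>}             # preposition
  VP: {<VB.*>+}          # verb group
'''
cp = nltk.RegexpParser(grammar)
tree = cp.parse(tagged)

# Inspect chunks programmatically instead of calling draw(), which needs a GUI.
np_chunks = [t for t in tree.subtrees() if t.label() == 'NP']
print(len(np_chunks))  # 2: "The grand jury" and "the report"
```

`cp.parse` returns an `nltk.Tree` whose root is labeled `S`; iterating `tree.subtrees()` is the usual way to pull out chunks when a GUI window is unavailable.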

Python NLTK Corpus Analysis: Tokenization, Word Frequency, POS Tagging, and Parsing

Original source: https://www.cveoy.top/t/topic/nzlj — copyright belongs to the author. Please do not repost or scrape!
