The NLTK method "concordance" lists every occurrence of a given word in a text, together with the passage where it appears. For example, you can call it to find all occurrences of the word "surprise" in Jane Austen's "Emma", with snippets of surrounding context, as shown in this slide.
This was the fourth meeting in the five-part series on natural language processing hosted by Women Who Code Austin at Rackspace. The instructor, Diana, introduced us to the basics of natural language processing and gave several demos of simple text analysis with the Python Natural Language Toolkit (NLTK): reading in a text, tokenizing it, and tagging parts of speech, which can involve a lot of interesting ambiguity. From this example you can also see that NLTK includes corpora of texts from Project Gutenberg, which is pretty handy.
Then we ventured deeper into natural language processing and discussed where and how it is used, including fields such as sentiment analysis. Diana talked about the challenges these fields face, for example determining similarity between concepts. We need to handle such similarity in order to extract accurate meanings from texts, and this is where ontologies can be handy.