Text Processing Tutorial with RapidMiner

I know that a while back someone requested (on either Piazza or in class, I can't remember which) a tutorial on how to process a text document in RapidMiner, and no one ever posted one. In this tutorial, I will try to fulfill that request by showing how to tokenize and filter a document into its different words and then do a word count for each word (essentially the same assignment as HW 2, plus filtering, but through RapidMiner instead of AWS).

1) I first downloaded my document (The Entire Works of Mark Twain) as a text file through Project Gutenberg's website. Save the file somewhere on your computer.

2) Open RapidMiner and click "New Process". On the left-hand pane of your screen, there should be a tab that says "Operators". This is where you can search for and find all of the operators for RapidMiner and its extensions. Searching the Operators tab for "read" should give you an output like this:

There are multiple read operators depending on which file type you have, and most of them work the same way. If you scroll down, there is a "Read Documents" operator. Drag this operator into your Main Process window. When you select the Read Documents operator in the Main Process window, you should see a file chooser in the right-hand pane.

Select the text file you want to use.

3) After you have chosen your file, make sure that the output port of the Read Documents operator is connected to the "res" node of your Main Process. Click the "Play" button to check that your file has been read correctly. Switch to the results perspective by clicking the icon that looks like a display chart above the "Process" tab at the top of the Main Process pane. Click the "Document (Read Document)" tab. Depending on the file you chose, your output text should look something like this:

4) Now we will move on to processing the document to get a list of its distinct words and the count of each. Search the Operators list for "Process Documents". Drag this operator into the main pane the same way you did the "Read Documents" operator.
Double-click the Process Documents operator to go inside it. This is where we will link operators together to break the entire text document down into its word components. The operators we need can be found in the Operators pane under the Text Processing folder. There you should see several subfolders such as "Tokenization", "Extraction", "Filtering", "Stemming", "Transformation", and "Utility"; these group the kinds of operations you can apply to your document. The first thing you will want to do is tokenize the document. Tokenization creates a "bag of words" contained in your document, which allows you to do further filtering. Search for the "Tokenize" operator and drag it into the "Process Documents" process.
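If it helps to see the idea outside of the GUI, here is a rough sketch in plain Python of tokenizing on non-letter characters (my approximation of what the Tokenize operator does in its default mode, not its exact implementation):

    import re

    # Split the raw text on runs of non-letter characters, roughly
    # mirroring Tokenize's "non letters" mode.
    def tokenize(text):
        return [tok for tok in re.split(r"[^A-Za-z]+", text) if tok]

    print(tokenize("The Adventures of Tom Sawyer, by Mark Twain"))
    # ['The', 'Adventures', 'of', 'Tom', 'Sawyer', 'by', 'Mark', 'Twain']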
Connect the "doc" node of the process to the "doc" input node of the operator if it has not connected automatically. Now we are ready to filter the bag of words. In the "Filtering" folder under the "Text Processing" operator folder, you can see the various filtering methods you can apply to your process. For this example, I want to filter out certain words that don't really carry any meaning in the document (such as a, and, the, as, of, etc.), so I will drag "Filter Stopwords (English)" into my process, because my document is in English. I also want to filter out any remaining words that are fewer than three characters long. Select "Filter Tokens by Length" and set the parameters as desired (in this case, I want the minimum number of characters to be 3 and the maximum to be an arbitrarily large number, since I don't care about an upper bound). Connect the nodes of each subsequent operator as shown in the picture.
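In plain Python terms, these two filters are simple list comprehensions. A minimal sketch, using a tiny stand-in set instead of the operator's built-in English stopword list:

    tokens = ["The", "Adventures", "of", "Tom", "Sawyer", "by", "Mark", "Twain"]

    # Tiny stand-in for the built-in English stopword list.
    STOPWORDS = {"a", "an", "and", "as", "at", "by", "in", "of", "the", "to"}

    # Filter Stopwords (English): drop common words with little meaning.
    tokens = [t for t in tokens if t.lower() not in STOPWORDS]

    # Filter tokens by length: min 3 characters, max arbitrarily large
    # since I don't care about an upper bound.
    tokens = [t for t in tokens if 3 <= len(t) <= 10**6]

    print(tokens)  # ['Adventures', 'Tom', 'Sawyer', 'Mark', 'Twain']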

After filtering the bag of words by stopwords and length, I want to transform all of the words to lowercase, since the same word would otherwise be counted differently in uppercase vs. lowercase. Select the "Transform Cases" operator and drag it into the process.
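Outside of RapidMiner, this step would be a one-liner; a sketch:

    tokens = ["Adventures", "Tom", "Sawyer", "Mark", "Twain", "twain"]

    # Transform Cases ("lower case"): fold every token to lowercase so
    # "Twain" and "twain" are counted as the same word later.
    tokens = [t.lower() for t in tokens]
    print(tokens)  # ['adventures', 'tom', 'sawyer', 'mark', 'twain', 'twain']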

5) Now that I have all the operators I need for this example, I check all of my node connections and click the "Play" button to run the process. If all goes well, your output should look like this in the results view:

Congrats! You can now see a word list containing all the distinct words in your document, with each word's occurrence count next to it in the "Total Occurrences" column. If you do not get this output, make sure that all of your nodes are connected correctly and to the right port types. Some errors occur because the output type at one node does not match the type expected at the input of the next operator. If you are still having trouble, please comment or check out the Rapid-i support forum.
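For reference, the whole process we just built boils down to a few lines of plain Python. This is only a sketch (the file name is a placeholder and the stopword set is a tiny stand-in for a real English list), but it produces the same kind of word list with total occurrence counts:

    import re
    from collections import Counter

    # Tiny stand-in for a real English stopword list.
    STOPWORDS = {"a", "an", "and", "as", "at", "by", "in", "of", "the", "to"}

    # Placeholder file name; use whatever you saved from Project Gutenberg.
    with open("mark_twain.txt", encoding="utf-8") as f:
        text = f.read()

    # Tokenize -> filter stopwords -> filter by length (min 3) -> lowercase.
    tokens = [t.lower() for t in re.split(r"[^A-Za-z]+", text)
              if t and t.lower() not in STOPWORDS and len(t) >= 3]

    # Word list with each word's total occurrences, like the results view.
    for word, count in Counter(tokens).most_common(20):
        print(word, count)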

 
