How to Prepare Movie Review Data for Sentiment Analysis (Text Classification)
Last Updated on December 21, 2020
Text data preparation is different for each problem.
Preparation starts with simple steps, like loading data, but quickly gets difficult with cleaning tasks that are very specific to the data you are working with. You need help as to where to begin and what order to work through the steps from raw data to data ready for modeling.
In this tutorial, you will discover how to prepare movie review text data for sentiment analysis, step-by-step.
After completing this tutorial, you will know:
- How to load text data and clean it to remove punctuation and other non-words.
- How to develop a vocabulary, tailor it, and save it to file.
- How to prepare movie reviews using cleaning and a pre-defined vocabulary and save them to new files ready for modeling.
Kick-start your project with my new book Deep Learning for Natural Language Processing, including step-by-step tutorials and the Python source code files for all examples.
Let's get started.
- Update October/2017: Fixed a small issue when skipping non-matching files, thanks Jan Zett.
- Update Dec/2017: Fixed a small typo in the full example, thanks Ray and Zain.
- Update Aug/2020: Updated link to movie review dataset.
How to Prepare Movie Review Data for Sentiment Analysis
Photo by Kenneth Lu, some rights reserved.
Tutorial Overview
This tutorial is divided into five parts; they are:
- Movie Review Dataset
- Load Text Data
- Clean Text Data
- Develop Vocabulary
- Save Prepared Data
Need help with Deep Learning for Text Data?
Take my free 7-day email crash course now (with code).
Click to sign-up and also get a free PDF Ebook version of the course.
1. Movie Review Dataset
The Movie Review Data is a collection of movie reviews retrieved from the imdb.com website in the early 2000s by Bo Pang and Lillian Lee. The reviews were collected and made available as part of their research on natural language processing.
The reviews were originally released in 2002, but an updated and cleaned-up version was released in 2004, referred to as "v2.0".
The dataset is comprised of 1,000 positive and 1,000 negative movie reviews drawn from an archive of the rec.arts.movies.reviews newsgroup hosted at IMDB. The authors refer to this dataset as the "polarity dataset".
Our data contains 1000 positive and 1000 negative reviews all written before 2002, with a cap of 20 reviews per author (312 authors total) per category. We refer to this corpus as the polarity dataset.
— A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts, 2004.
The data has been cleaned up somewhat, for example:
- The dataset is comprised of only English reviews.
- All text has been converted to lowercase.
- There is white space around punctuation like periods, commas, and brackets.
- Text has been split into one sentence per line.
The data has been used for a few related natural language processing tasks. For classification, the performance of classical models (such as Support Vector Machines) on the data is in the range of high 70% to low 80% (e.g. 78%-to-82%).
More sophisticated data preparation may see results as high as 86% with 10-fold cross-validation. This gives us a ballpark of low-to-mid 80s if we were looking to use this dataset in experiments on modern methods.
… depending on choice of downstream polarity classifier, we can achieve highly statistically significant improvement (from 82.8% to 86.4%)
— A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts, 2004.
You can download the dataset from here:
- Movie Review Polarity Dataset (review_polarity.tar.gz, 3MB)
After unzipping the file, you will have a directory called "txt_sentoken" with two sub-directories containing the text "neg" and "pos" for negative and positive reviews. Reviews are stored one per file with a naming convention cv000 to cv999 for each of neg and pos.
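As a quick sanity check (my addition, not part of the original tutorial), a few lines of Python can confirm that each folder contains the expected 1,000 review files; the path assumes the archive was unzipped into the current working directory:

from os import listdir

# count the .txt review files in each class directory
for label in ['neg', 'pos']:
    directory = 'txt_sentoken/' + label
    files = [name for name in listdir(directory) if name.endswith('.txt')]
    print('%s: %d files' % (label, len(files)))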
Next, let's look at loading the text data.
2. Load Text Data
In this section, we will look at loading individual text files, then processing the directories of files.
We will assume that the review data is downloaded and available in the current working directory in the folder "txt_sentoken".
We can load an individual text file by opening it, reading in the ASCII text, and closing the file. This is standard file handling stuff. For example, we can load the first negative review file "cv000_29416.txt" as follows:
# load one file
filename = 'txt_sentoken/neg/cv000_29416.txt'
# open the file as read only
file = open(filename, 'r')
# read all text
text = file.read()
# close the file
file.close()
This loads the document as ASCII and preserves any white space, like new lines.
We can turn this into a function called load_doc() that takes a filename of the document to load and returns the text.
# load doc into memory
def load_doc(filename):
    # open the file as read only
    file = open(filename, 'r')
    # read all text
    text = file.read()
    # close the file
    file.close()
    return text
We have two directories, each with 1,000 documents. We can process each directory in turn by first getting a list of files in the directory using the listdir() function, then loading each file in turn.
For example, we can load each document in the negative directory using the load_doc() function to do the actual loading.
from os import listdir

# load doc into memory
def load_doc(filename):
    # open the file as read only
    file = open(filename, 'r')
    # read all text
    text = file.read()
    # close the file
    file.close()
    return text

# specify directory to load
directory = 'txt_sentoken/neg'
# walk through all files in the folder
for filename in listdir(directory):
    # skip files that do not have the right extension
    if not filename.endswith(".txt"):
        continue
    # create the full path of the file to open
    path = directory + '/' + filename
    # load document
    doc = load_doc(path)
    print('Loaded %s' % filename)
Running this example prints the filename of each review after it is loaded.
...
Loaded cv995_23113.txt
Loaded cv996_12447.txt
Loaded cv997_5152.txt
Loaded cv998_15691.txt
Loaded cv999_14636.txt
We can turn the processing of the documents into a function as well and use it as a template later for developing a function to clean all documents in a folder. For example, below we define a process_docs() function to do the same thing.
from os import listdir

# load doc into memory
def load_doc(filename):
    # open the file as read only
    file = open(filename, 'r')
    # read all text
    text = file.read()
    # close the file
    file.close()
    return text

# load all docs in a directory
def process_docs(directory):
    # walk through all files in the folder
    for filename in listdir(directory):
        # skip files that do not have the right extension
        if not filename.endswith(".txt"):
            continue
        # create the full path of the file to open
        path = directory + '/' + filename
        # load document
        doc = load_doc(path)
        print('Loaded %s' % filename)

# specify directory to load
directory = 'txt_sentoken/neg'
process_docs(directory)
Now that we know how to load the movie review text data, let's look at cleaning it.
3. Clean Text Data
In this section, we will look at what data cleaning we might want to do to the movie review data.
We will assume that we will be using a bag-of-words model or perhaps a word embedding that does not require too much preparation.
Split into Tokens
First, let's load one document and look at the raw tokens split by white space. We will use the load_doc() function developed in the previous section. We can use the split() function to split the loaded document into tokens separated by white space.
# load doc into memory
def load_doc(filename):
    # open the file as read only
    file = open(filename, 'r')
    # read all text
    text = file.read()
    # close the file
    file.close()
    return text

# load the document
filename = 'txt_sentoken/neg/cv000_29416.txt'
text = load_doc(filename)
# split into tokens by white space
tokens = text.split()
print(tokens)
Running the example gives a nice long list of raw tokens from the document.
...
'years', 'ago', 'and', 'has', 'been', 'sitting', 'on', 'the', 'shelves', 'ever', 'since', '.', 'whatever', '.', '.', '.', 'skip', 'it', '!', "where's", 'joblo', 'coming', 'from', '?', 'a', 'nightmare', 'of', 'elm', 'street', '3', '(', '7/10', ')', '-', 'blair', 'witch', '2', '(', '7/10', ')', '-', 'the', 'crow', '(', '9/10', ')', '-', 'the', 'crow', ':', 'salvation', '(', '4/10', ')', '-', 'lost', 'highway', '(', '10/10', ')', '-', 'memento', '(', '10/10', ')', '-', 'the', 'others', '(', '9/10', ')', '-', 'stir', 'of', 'echoes', '(', '8/10', ')']
Just looking at the raw tokens can give us a lot of ideas of things to try, such as:
- Remove punctuation from words (e.g. 'what's').
- Removing tokens that are only punctuation (e.g. '-').
- Removing tokens that contain numbers (e.g. '10/10').
- Remove tokens that have one character (e.g. 'a').
- Remove tokens that don't have much meaning (e.g. 'and').
Some ideas:
- We can filter out punctuation from tokens using the string translate() function.
- We can remove tokens that are just punctuation or contain numbers by using an isalpha() check on each token.
- We can remove English stop words using the list loaded using NLTK.
- We can filter out short tokens by checking their length.
Below is an updated version of cleaning this review.
from nltk.corpus import stopwords
import string

# load doc into memory
def load_doc(filename):
    # open the file as read only
    file = open(filename, 'r')
    # read all text
    text = file.read()
    # close the file
    file.close()
    return text

# load the document
filename = 'txt_sentoken/neg/cv000_29416.txt'
text = load_doc(filename)
# split into tokens by white space
tokens = text.split()
# remove punctuation from each token
table = str.maketrans('', '', string.punctuation)
tokens = [w.translate(table) for w in tokens]
# remove remaining tokens that are not alphabetic
tokens = [word for word in tokens if word.isalpha()]
# filter out stop words
stop_words = set(stopwords.words('english'))
tokens = [w for w in tokens if not w in stop_words]
# filter out short tokens
tokens = [word for word in tokens if len(word) > 1]
print(tokens)
Running the example gives a much cleaner-looking list of tokens.
...
'explanation', 'craziness', 'came', 'oh', 'way', 'horror', 'teen', 'slasher', 'flick', 'packaged', 'look', 'way', 'someone', 'apparently', 'assuming', 'genre', 'still', 'hot', 'kids', 'also', 'wrapped', 'production', 'two', 'years', 'ago', 'sitting', 'shelves', 'ever', 'since', 'whatever', 'skip', 'wheres', 'joblo', 'coming', 'nightmare', 'elm', 'street', 'blair', 'witch', 'crow', 'crow', 'salvation', 'lost', 'highway', 'memento', 'others', 'stir', 'echoes']
We can put this into a function called clean_doc() and test it on another review, this time a positive review.
from nltk.corpus import stopwords
import string

# load doc into memory
def load_doc(filename):
    # open the file as read only
    file = open(filename, 'r')
    # read all text
    text = file.read()
    # close the file
    file.close()
    return text

# turn a doc into clean tokens
def clean_doc(doc):
    # split into tokens by white space
    tokens = doc.split()
    # remove punctuation from each token
    table = str.maketrans('', '', string.punctuation)
    tokens = [w.translate(table) for w in tokens]
    # remove remaining tokens that are not alphabetic
    tokens = [word for word in tokens if word.isalpha()]
    # filter out stop words
    stop_words = set(stopwords.words('english'))
    tokens = [w for w in tokens if not w in stop_words]
    # filter out short tokens
    tokens = [word for word in tokens if len(word) > 1]
    return tokens

# load the document
filename = 'txt_sentoken/pos/cv000_29590.txt'
text = load_doc(filename)
tokens = clean_doc(text)
print(tokens)
Again, the cleaning procedure seems to produce a good set of tokens, at least as a first cut.
...
'comic', 'oscar', 'winner', 'martin', 'childs', 'shakespeare', 'love', 'production', 'design', 'turns', 'original', 'prague', 'surroundings', 'one', 'creepy', 'place', 'even', 'acting', 'hell', 'solid', 'dreamy', 'depp', 'turning', 'typically', 'strong', 'performance', 'deftly', 'handling', 'british', 'accent', 'ians', 'holm', 'joe', 'goulds', 'secret', 'richardson', 'dalmatians', 'log', 'great', 'supporting', 'roles', 'big', 'surprise', 'graham', 'cringed', 'first', 'time', 'opened', 'mouth', 'imagining', 'try', 'irish', 'accent', 'actually', 'wasnt', 'half', 'bad', 'movie', 'still', 'good', 'strong', 'violencegore', 'sexuality', 'language', 'drug', 'content']
There are many more cleaning steps we could take and I leave them to your imagination.
Next, let's look at how we can manage a preferred vocabulary of tokens.
4. Develop Vocabulary
When working with predictive models of text, like a bag-of-words model, there is a pressure to reduce the size of the vocabulary.
The larger the vocabulary, the more sparse the representation of each word or document.
A part of preparing text for sentiment analysis involves defining and tailoring the vocabulary of words supported by the model.
We can do this by loading all of the documents in the dataset and building a set of words. We may decide to support all of these words, or perhaps discard some. The final chosen vocabulary can then be saved to file for later use, such as filtering words in new documents in the future.
We can keep track of the vocabulary in a Counter, which is a dictionary of words and their counts with some additional convenience functions.
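As a minimal illustration of this behavior (my addition, using made-up tokens), counts accumulate across repeated calls to update():

from collections import Counter

# counts accumulate across calls to update()
vocab = Counter()
vocab.update(['great', 'movie', 'great'])
vocab.update(['terrible', 'movie'])
# both 'great' and 'movie' now have a count of 2
print(vocab.most_common(2))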
We need to develop a new function to process a document and add it to the vocabulary. The function needs to load a document by calling the previously developed load_doc() function. It needs to clean the loaded document using the previously developed clean_doc() function, then it needs to add all the tokens to the Counter and update counts. We can do this last step by calling the update() function on the counter object.
Below is a function called add_doc_to_vocab() that takes as arguments a document filename and a Counter vocabulary.
# load doc and add to vocab
def add_doc_to_vocab(filename, vocab):
    # load doc
    doc = load_doc(filename)
    # clean doc
    tokens = clean_doc(doc)
    # update counts
    vocab.update(tokens)
Finally, we can use our template above for processing all documents in a directory, called process_docs(), and update it to call add_doc_to_vocab().
# load all docs in a directory
def process_docs(directory, vocab):
    # walk through all files in the folder
    for filename in listdir(directory):
        # skip files that do not have the right extension
        if not filename.endswith(".txt"):
            continue
        # create the full path of the file to open
        path = directory + '/' + filename
        # add doc to vocab
        add_doc_to_vocab(path, vocab)
We can put all of this together and develop a full vocabulary from all documents in the dataset.
from string import punctuation
from os import listdir
from collections import Counter
from nltk.corpus import stopwords

# load doc into memory
def load_doc(filename):
    # open the file as read only
    file = open(filename, 'r')
    # read all text
    text = file.read()
    # close the file
    file.close()
    return text

# turn a doc into clean tokens
def clean_doc(doc):
    # split into tokens by white space
    tokens = doc.split()
    # remove punctuation from each token
    table = str.maketrans('', '', punctuation)
    tokens = [w.translate(table) for w in tokens]
    # remove remaining tokens that are not alphabetic
    tokens = [word for word in tokens if word.isalpha()]
    # filter out stop words
    stop_words = set(stopwords.words('english'))
    tokens = [w for w in tokens if not w in stop_words]
    # filter out short tokens
    tokens = [word for word in tokens if len(word) > 1]
    return tokens

# load doc and add to vocab
def add_doc_to_vocab(filename, vocab):
    # load doc
    doc = load_doc(filename)
    # clean doc
    tokens = clean_doc(doc)
    # update counts
    vocab.update(tokens)

# load all docs in a directory
def process_docs(directory, vocab):
    # walk through all files in the folder
    for filename in listdir(directory):
        # skip files that do not have the right extension
        if not filename.endswith(".txt"):
            continue
        # create the full path of the file to open
        path = directory + '/' + filename
        # add doc to vocab
        add_doc_to_vocab(path, vocab)

# define vocab
vocab = Counter()
# add all docs to vocab
process_docs('txt_sentoken/neg', vocab)
process_docs('txt_sentoken/pos', vocab)
# print the size of the vocab
print(len(vocab))
# print the top words in the vocab
print(vocab.most_common(50))
Running the example creates a vocabulary with all documents in the dataset, including positive and negative reviews.
We can see that there are a little over 46,000 unique words across all reviews and the top 3 words are 'film', 'one', and 'movie'.
46557
[('film', 8860), ('one', 5521), ('movie', 5440), ('like', 3553), ('even', 2555), ('good', 2320), ('time', 2283), ('story', 2118), ('films', 2102), ('would', 2042), ('much', 2024), ('also', 1965), ('characters', 1947), ('get', 1921), ('character', 1906), ('two', 1825), ('first', 1768), ('see', 1730), ('well', 1694), ('way', 1668), ('make', 1590), ('really', 1563), ('little', 1491), ('life', 1472), ('plot', 1451), ('people', 1420), ('movies', 1416), ('could', 1395), ('bad', 1374), ('scene', 1373), ('never', 1364), ('best', 1301), ('new', 1277), ('many', 1268), ('doesnt', 1267), ('man', 1266), ('scenes', 1265), ('dont', 1210), ('know', 1207), ('hes', 1150), ('great', 1141), ('another', 1111), ('love', 1089), ('action', 1078), ('go', 1075), ('us', 1065), ('director', 1056), ('something', 1048), ('end', 1047), ('still', 1038)]
Perhaps the least common words, those that only appear once across all reviews, are not predictive. Perhaps some of the most common words are not useful too.
These are good questions and really should be tested with a specific predictive model.
Generally, words that only appear once or a few times across 2,000 reviews are probably not predictive and can be removed from the vocabulary, greatly cutting down on the tokens we need to model.
We can do this by stepping through words and their counts and only keeping those with a count above a chosen threshold. Here we will use 5 occurrences.
# keep tokens with a minimum occurrence of 5
min_occurrence = 5
tokens = [k for k, c in vocab.items() if c >= min_occurrence]
print(len(tokens))
This reduces the vocabulary from 46,557 to 14,803 words, a huge drop. Perhaps a minimum of 5 occurrences is too aggressive; you can experiment with different values.
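If you want to explore that trade-off, a small sketch (my addition) can sweep a few candidate thresholds against the vocab Counter built above and print the resulting vocabulary size for each:

# compare vocabulary size for a few candidate occurrence thresholds
for min_occurrence in [2, 5, 10, 20]:
    kept = [k for k, c in vocab.items() if c >= min_occurrence]
    print('min occurrence %d -> %d words' % (min_occurrence, len(kept)))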
We can then save the chosen vocabulary of words to a new file. I like to save the vocabulary as ASCII with one word per line.
Below defines a function called save_list() to save a list of items, in this case, tokens to file, one per line.
def save_list(lines, filename):
    data = '\n'.join(lines)
    file = open(filename, 'w')
    file.write(data)
    file.close()
The complete example for defining and saving the vocabulary is listed below.
from string import punctuation
from os import listdir
from collections import Counter
from nltk.corpus import stopwords

# load doc into memory
def load_doc(filename):
    # open the file as read only
    file = open(filename, 'r')
    # read all text
    text = file.read()
    # close the file
    file.close()
    return text

# turn a doc into clean tokens
def clean_doc(doc):
    # split into tokens by white space
    tokens = doc.split()
    # remove punctuation from each token
    table = str.maketrans('', '', punctuation)
    tokens = [w.translate(table) for w in tokens]
    # remove remaining tokens that are not alphabetic
    tokens = [word for word in tokens if word.isalpha()]
    # filter out stop words
    stop_words = set(stopwords.words('english'))
    tokens = [w for w in tokens if not w in stop_words]
    # filter out short tokens
    tokens = [word for word in tokens if len(word) > 1]
    return tokens

# load doc and add to vocab
def add_doc_to_vocab(filename, vocab):
    # load doc
    doc = load_doc(filename)
    # clean doc
    tokens = clean_doc(doc)
    # update counts
    vocab.update(tokens)

# load all docs in a directory
def process_docs(directory, vocab):
    # walk through all files in the folder
    for filename in listdir(directory):
        # skip files that do not have the right extension
        if not filename.endswith(".txt"):
            continue
        # create the full path of the file to open
        path = directory + '/' + filename
        # add doc to vocab
        add_doc_to_vocab(path, vocab)

# save list to file
def save_list(lines, filename):
    data = '\n'.join(lines)
    file = open(filename, 'w')
    file.write(data)
    file.close()

# define vocab
vocab = Counter()
# add all docs to vocab
process_docs('txt_sentoken/neg', vocab)
process_docs('txt_sentoken/pos', vocab)
# print the size of the vocab
print(len(vocab))
# print the top words in the vocab
print(vocab.most_common(50))
# keep tokens with a minimum occurrence of 5
min_occurrence = 5
tokens = [k for k, c in vocab.items() if c >= min_occurrence]
print(len(tokens))
# save tokens to a vocabulary file
save_list(tokens, 'vocab.txt')
Running this final snippet after creating the vocabulary will save the chosen words to file.
It is a good idea to take a look at, and even study, your chosen vocabulary in order to get ideas for better preparing this data, or text data in the future.
hasnt
updating
figuratively
symphony
civilians
might
fisherman
hokum
witch
buffoons
...
Next, we can look at using the vocabulary to create a prepared version of the movie review dataset.
5. Save Prepared Data
We can use the data cleaning and chosen vocabulary to prepare each movie review and save the prepared versions of the reviews ready for modeling.
This is a good practice as it decouples the data preparation from modeling, allowing you to focus on modeling and circle back to data preparation if you have new ideas.
We can start off by loading the vocabulary from 'vocab.txt'.
# load doc into memory
def load_doc(filename):
    # open the file as read only
    file = open(filename, 'r')
    # read all text
    text = file.read()
    # close the file
    file.close()
    return text

# load vocabulary
vocab_filename = 'review_polarity/vocab.txt'
vocab = load_doc(vocab_filename)
vocab = vocab.split()
vocab = set(vocab)
Next, we can clean the reviews, use the loaded vocab to filter out unwanted tokens, and save the clean reviews in a new file.
One approach could be to save all the positive reviews in one file and all the negative reviews in another file, with the filtered tokens separated by white space for each review on separate lines.
First, we can define a function to process a document, clean it, filter it, and return it as a single line that could be saved in a file. Below defines the doc_to_line() function to do just that, taking a filename and vocabulary (as a set) as arguments.
It calls the previously defined load_doc() function to load the document and clean_doc() to tokenize the document.
# load doc, clean and return line of tokens
def doc_to_line(filename, vocab):
    # load the doc
    doc = load_doc(filename)
    # clean doc
    tokens = clean_doc(doc)
    # filter by vocab
    tokens = [w for w in tokens if w in vocab]
    return ' '.join(tokens)
Next, we can define a new version of process_docs() to step through all reviews in a folder and convert them to lines by calling doc_to_line() for each document. A list of lines is then returned.
# load all docs in a directory
def process_docs(directory, vocab):
    lines = list()
    # walk through all files in the folder
    for filename in listdir(directory):
        # skip files that do not have the right extension
        if not filename.endswith(".txt"):
            continue
        # create the full path of the file to open
        path = directory + '/' + filename
        # load and clean the doc
        line = doc_to_line(path, vocab)
        # add to list
        lines.append(line)
    return lines
We can then call process_docs() for both the directories of positive and negative reviews, then call save_list() from the previous section to save each list of processed reviews to a file.
The complete code listing is provided below.
from string import punctuation
from os import listdir
from collections import Counter
from nltk.corpus import stopwords

# load doc into memory
def load_doc(filename):
    # open the file as read only
    file = open(filename, 'r')
    # read all text
    text = file.read()
    # close the file
    file.close()
    return text

# turn a doc into clean tokens
def clean_doc(doc):
    # split into tokens by white space
    tokens = doc.split()
    # remove punctuation from each token
    table = str.maketrans('', '', punctuation)
    tokens = [w.translate(table) for w in tokens]
    # remove remaining tokens that are not alphabetic
    tokens = [word for word in tokens if word.isalpha()]
    # filter out stop words
    stop_words = set(stopwords.words('english'))
    tokens = [w for w in tokens if not w in stop_words]
    # filter out short tokens
    tokens = [word for word in tokens if len(word) > 1]
    return tokens

# save list to file
def save_list(lines, filename):
    data = '\n'.join(lines)
    file = open(filename, 'w')
    file.write(data)
    file.close()

# load doc, clean and return line of tokens
def doc_to_line(filename, vocab):
    # load the doc
    doc = load_doc(filename)
    # clean doc
    tokens = clean_doc(doc)
    # filter by vocab
    tokens = [w for w in tokens if w in vocab]
    return ' '.join(tokens)

# load all docs in a directory
def process_docs(directory, vocab):
    lines = list()
    # walk through all files in the folder
    for filename in listdir(directory):
        # skip files that do not have the right extension
        if not filename.endswith(".txt"):
            continue
        # create the full path of the file to open
        path = directory + '/' + filename
        # load and clean the doc
        line = doc_to_line(path, vocab)
        # add to list
        lines.append(line)
    return lines

# load vocabulary
vocab_filename = 'vocab.txt'
vocab = load_doc(vocab_filename)
vocab = vocab.split()
vocab = set(vocab)
# prepare negative reviews
negative_lines = process_docs('txt_sentoken/neg', vocab)
save_list(negative_lines, 'negative.txt')
# prepare positive reviews
positive_lines = process_docs('txt_sentoken/pos', vocab)
save_list(positive_lines, 'positive.txt')
Running the example saves two new files, 'negative.txt' and 'positive.txt', that contain the prepared negative and positive reviews respectively.
The data is ready for use in a bag-of-words or even word embedding model.
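As a hint of that next step (my addition, not covered in this tutorial), a minimal sketch using scikit-learn's CountVectorizer shows how the two saved files could be loaded and turned into a bag-of-words matrix; the vectorizer settings are just one reasonable choice:

from sklearn.feature_extraction.text import CountVectorizer

# load the prepared reviews, one review per line
negative = open('negative.txt', 'r').read().splitlines()
positive = open('positive.txt', 'r').read().splitlines()
docs = negative + positive
# label negative reviews 0 and positive reviews 1
labels = [0] * len(negative) + [1] * len(positive)

# encode each review as a vector of token counts
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)
print(X.shape)  # (2000, size of the encoded vocabulary)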
Extensions
This section lists some extensions that you may wish to explore.
- Stemming. We could reduce each word in documents to their stem using a stemming algorithm like the Porter stemmer (see the sketch after this list).
- N-Grams. Instead of working with individual words, we could work with a vocabulary of word pairs, called bigrams. We could also investigate the use of larger groups, such as triplets (trigrams) and more (n-grams).
- Encode Words. Instead of saving tokens as-is, we could save the integer encoding of the words, where the index of the word in the vocabulary represents a unique integer number for the word. This will make it easier to work with the data when modeling.
- Encode Documents. Instead of saving tokens in documents, we could encode the documents using a bag-of-words model and encode each word as a boolean present/absent flag or use more sophisticated scoring, such as TF-IDF.
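As a starting point for the stemming extension above, here is a minimal sketch (my addition) using NLTK's PorterStemmer on a few of the tokens we saw earlier; in practice the stemming step would go inside clean_doc():

from nltk.stem.porter import PorterStemmer

# reduce each token to its stem
porter = PorterStemmer()
tokens = ['sitting', 'shelves', 'nightmare', 'echoes']
stemmed = [porter.stem(word) for word in tokens]
# e.g. 'sitting' is reduced to 'sit'
print(stemmed)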
If you try any of these extensions, I'd love to know.
Share your results in the comments below.
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
Papers
- A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts, 2004.
APIs
- nltk.tokenize package API
- Chapter 2, Accessing Text Corpora and Lexical Resources
- os API – Miscellaneous operating system interfaces
- collections API – Container datatypes
Summary
In this tutorial, you discovered how to prepare movie review text data for sentiment analysis, step-by-step.
Specifically, you learned:
- How to load text data and clean it to remove punctuation and other non-words.
- How to develop a vocabulary, tailor it, and save it to file.
- How to prepare movie reviews using cleaning and a predefined vocabulary and save them to new files ready for modeling.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
Source: https://machinelearningmastery.com/prepare-movie-review-data-sentiment-analysis/