Na-Rae Han (naraehan@pitt.edu), 2/17/2017, Pitt Library Workshop
Jupyter tips:
More at https://www.cheatography.com/weidadeyue/cheat-sheets/jupyter-notebook/
print("hello, world!")
greet is a variable name assigned to a string value; note the absence of quotation marks around the variable name.
greet = "Hello, world!"
greet
greet + " I come in peace."
greet.upper()
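Strings come with many more built-in methods than .upper(); here is a small sketch of a few standard ones to try on greet (the replacement text is just an example):
greet.lower()                      # 'hello, world!'
greet.replace('world', 'Python')   # 'Hello, Python!' (example substitution)
greet.split(', ')                  # ['Hello', 'world!']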
len() returns the length of a string in number of characters.
len(greet)
Use +, -, * and / with numbers.
num1 = 5678
num2 = 3.141592
result = num1 / num2
print(result)
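Python also provides floor division (//), remainder (%), and exponentiation (**), plus a built-in round(); a quick sketch reusing num1 and result from above:
num1 // 1000       # floor division: 5
num1 % 1000        # remainder: 678
2 ** 10            # exponentiation: 1024
round(result, 2)   # round to 2 decimal places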
Lists are enclosed in [ ], with elements separated by commas. Lists can hold strings, numbers, and more. Use len() to get the size of a list, and in to see if an element is in a list.
li = ['red', 'blue', 'green', 'black', 'white', 'pink']
len(li)
'blue' in li
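Lists are also indexed and sliced with [ ], counting from 0; a quick sketch on li:
li[0]      # first element: 'red'
li[-1]     # last element: 'pink'
li[1:3]    # slice from index 1 up to (not including) 3: ['blue', 'green']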
A for loop runs the indented block once for each element of a list:
for x in li:
    print(x, len(x))
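A for loop combines naturally with an if test; this sketch prints only the five-letter color names:
for x in li:
    if len(x) == 5:
        print(x)    # green, black, white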
List comprehensions build a new list by applying an operation such as .upper(), len(), or + 'ish' to each element, optionally filtering as they go:
[x for x in li if x.endswith('e')]
[x+'ish' for x in li]
[len(x) for x in li]
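A comprehension can transform and filter in one step; a sketch combining both:
[x.upper() for x in li if len(x) == 4]   # ['BLUE', 'PINK']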
A dictionary holds key:value pairs, enclosed in { }. Look up a value through its key:
di = {'Homer':35, 'Marge':35, 'Bart':10, 'Lisa':8}
di['Bart']
len(di)
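Dictionaries can be updated in place and looped over with .items(); a sketch (the 'Maggie' entry is made up for illustration):
di['Maggie'] = 1              # add a new key:value pair
for name, age in di.items():
    print(name, age)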
NLTK is an external module; you can start using it after importing it.
nltk.word_tokenize() is a handy tokenizing function, one of literally tons of functions NLTK provides.
It turns a text (a single string) into a list of tokenized words.
import nltk
nltk.word_tokenize(greet)
sent = "You haven't seen Star Wars...?"
nltk.word_tokenize(sent)
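NLTK can also split a text into sentences with nltk.sent_tokenize(); a sketch (if you get a resource error, run nltk.download('punkt') once first):
text = "Hello, world! I come in peace."
nltk.sent_tokenize(text)   # ['Hello, world!', 'I come in peace.']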
nltk.FreqDist() is another useful NLTK function: it tallies up the frequency of each token in a list.
sent = 'Rose is a rose is a rose is a rose.'
toks = nltk.word_tokenize(sent)
print(toks)
freq = nltk.FreqDist(toks)
freq
freq.most_common(3)
freq['rose']
len(freq)
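FreqDist objects have other handy methods too; two documented ones are .N(), the total token count, and .hapaxes(), the tokens occurring exactly once:
freq.N()          # total number of tokens: 11
freq.hapaxes()    # tokens seen only once: ['Rose', '.']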
open(filename).read() reads in the content of a text file as a single string.
myfile = 'C:/Users/narae/Desktop/inaugural/1789-Washington.txt'  # Mac users should leave out C:
wtxt = open(myfile).read()
print(wtxt)
len(wtxt) # Number of characters in text
'fellow citizens' in wtxt
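If a file's characters come out garbled, pass an explicit encoding to open(); the with statement also closes the file for you. A sketch of the more careful idiom (utf-8 is an assumption; match your file's actual encoding):
with open(myfile, encoding='utf-8') as f:   # encoding here is an assumption
    wtxt = f.read()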
nltk.word_tokenize(wtxt)
wtokens = nltk.word_tokenize(wtxt)
len(wtokens) # Number of words in text
wfreq = nltk.FreqDist(wtokens)
wfreq['citizens']
len(wfreq) # Number of unique words in text
wfreq.most_common(40) # 40 most common words
sentcount = wfreq['.'] + wfreq['?'] + wfreq['!'] # Assuming every sentence ends with ., ! or ?
sentcount
len(wtokens)/sentcount # Average sentence length in number of words
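Note that wfreq counts 'Citizens' and 'citizens' as different words; lowercasing the tokens first merges them. A sketch (the _low names are just for this example):
wtokens_low = [t.lower() for t in wtokens]   # fold every token to lowercase
wfreq_low = nltk.FreqDist(wtokens_low)
wfreq_low['citizens']   # now includes capitalized occurrences too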
[w for w in wfreq if len(w) >= 13] # all 13+ character words
long = [w for w in wfreq if len(w) >= 13]
for w in long:
    print(w, len(w), wfreq[w])  # long words tend to be less frequent
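To see which long words are relatively frequent, sort them by their count; sorted() with a key function is standard Python:
for w in sorted(long, key=wfreq.get, reverse=True):
    print(w, len(w), wfreq[w])   # most frequent long words first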
Take a Python course!