2 votes

I'm getting "weird" results using scikit-learn's Tfidf transformer. Normally, I would expect a word that occurs in all documents in a corpus to have an idf equal to 0 (using no smoothing or normalization), as the formula I would use is the logarithm of the number of documents in the corpus divided by the number of documents containing the term. Apparently (as illustrated below), scikit-learn's implementation adds one to each idf value compared to my manual implementation. Does anybody know why? Again, note that I have set normalization and smoothing to None/False.

In [101]: from sklearn.feature_extraction.text import TfidfTransformer

In [102]: counts
Out[102]: 
array([[3, 0, 1],
       [2, 0, 0],
       [3, 0, 0],
       [4, 0, 0],
       [3, 2, 0],
       [3, 0, 2]])

In [103]: transformer = TfidfTransformer(norm=None, smooth_idf=False)

In [104]: transformer
Out[104]: 
TfidfTransformer(norm=None, smooth_idf=False, sublinear_tf=False,
         use_idf=True)

In [105]: tfidf = transformer.fit_transform(counts)

In [106]: tfidf.toarray()
Out[106]: 
array([[ 3.        ,  0.        ,  2.09861229],
       [ 2.        ,  0.        ,  0.        ],
       [ 3.        ,  0.        ,  0.        ],
       [ 4.        ,  0.        ,  0.        ],
       [ 3.        ,  5.58351894,  0.        ],
       [ 3.        ,  0.        ,  4.19722458]])

In [107]: transformer.idf_
Out[107]: array([ 1.        ,  2.79175947,  2.09861229])

In [108]: idf1 = np.log(6/6)

In [109]: idf1
Out[109]: 0.0

In [110]: idf2 = np.log(6/1)

In [111]: idf2
Out[111]: 1.791759469228055

In [112]: idf3 = np.log(6/2)

In [113]: idf3
Out[113]: 1.0986122886681098

I have been unable to find any source that justifies adding one to the idf values. I'm using scikit-learn version 0.14.1.

By the way, a solution outside scikit-learn is not really useful to me, as I need to build a scikit-learn pipeline for grid search.
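For reference, a rough sketch of the kind of pipeline I have in mind (the CountVectorizer, MultinomialNB, and parameter grid are just placeholders; in version 0.14.1 GridSearchCV lives in sklearn.grid_search):

from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB

# counts -> tf-idf -> classifier, so the tf-idf settings can be tuned
# together with the model in a single grid search
pipeline = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer(norm=None, smooth_idf=False)),
    ('clf', MultinomialNB()),
])

param_grid = {
    'tfidf__sublinear_tf': [True, False],
    'clf__alpha': [0.1, 1.0],
}

grid = GridSearchCV(pipeline, param_grid, cv=3)
# grid.fit(documents, labels)  # documents/labels are placeholders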


1 Answer

6 votes

This is not a bug, it's a feature; this is how the idf is computed in the scikit-learn source:

# log1p instead of log makes sure terms with zero idf don't get
# suppressed entirely
idf = np.log(float(n_samples) / df) + 1.0

This +1 (as mentioned in the comment) is used to make the idf weighting weaker; otherwise, terms that occur in all documents would be suppressed entirely (their idf would be 0, so their whole tf-idf would be 0).
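A quick check (assuming numpy is imported as np) reproduces the idf_ values from the question with exactly this formula:

import numpy as np

n_samples = 6                  # number of documents in the corpus
df = np.array([6, 1, 2])       # documents containing each of the three terms

idf = np.log(float(n_samples) / df) + 1.0
# array([ 1.        ,  2.79175947,  2.09861229])  -- same as transformer.idf_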