As part of evaluating a model's metrics, I would like to use cross_val_score in sklearn to generate the negative predictive value (NPV) for the model.
In the example below, I set the 'scoring' parameter within cross_val_score to 'precision' to calculate and print the positive predictive value (PPV) of the model, as the mean and standard deviation over 10-fold cross-validation:
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

log = LogisticRegression()
# 'precision' corresponds to positive predictive value (PPV)
log_prec = cross_val_score(log, x, y, cv=10, scoring='precision')
print("PPV (mean, std): ", np.round(log_prec.mean(), 2), np.round(log_prec.std(), 2))
How can I use something like the above line of code to generate the negative predictive value (NPV, the likelihood that a predicted negative is a true negative) from within the cross_val_score method?
sklearn provides many scoring options (e.g. roc_auc, recall, accuracy, F1, etc.) but unfortunately not one for NPV...
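My best guess is to build a custom scorer with make_scorer from sklearn.metrics and pass it as the scoring argument, computing NPV from the confusion matrix. The npv_score function below is just my own sketch (it assumes a binary classification problem with labels 0/1), not a built-in sklearn metric:

from sklearn.metrics import make_scorer, confusion_matrix

def npv_score(y_true, y_pred):
    # NPV = TN / (TN + FN): fraction of predicted negatives that are truly negative
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tn / (tn + fn) if (tn + fn) > 0 else 0.0

npv_scorer = make_scorer(npv_score)
log_npv = cross_val_score(log, x, y, cv=10, scoring=npv_scorer)
print("NPV (mean, std): ", np.round(log_npv.mean(), 2), np.round(log_npv.std(), 2))

Is something like this a reasonable approach, or is there a built-in option I'm missing?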