dcdevaluation is a library that streamlines the model evaluation process. It was built around the needs of the data science team at Decode, a company based in São Paulo, Brazil. As it is in its early stages, it currently supports only classification models.
```
pip install dcdevaluation
```

```python
from dcdevaluation import Evaluators
```

To use the `Evaluators` class, instantiate it into a Python object, passing a list of probabilities (the output of your model) and the true values of your data base (the target feature used to train your model):

```python
train_dataset = Evaluators(predicted_y, true_y)
```

The `evaluate` method attributes scores for all supported metrics (listed below) to your selected "dataset":

```python
train_dataset.evaluate()
```

This method returns:
```python
train_dataset.ks
train_dataset.auc
train_dataset.f1
train_dataset.precision
train_dataset.recall
train_dataset.accuracy
```
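For intuition about what these six scores measure, they can be computed by hand. The sketch below uses plain Python and a made-up toy sample; it is an illustration of the metrics themselves, not dcdevaluation's internal code:

```python
# Hand-rolled versions of the six supported metrics, for intuition only.
# predicted_y holds probabilities, true_y holds 0/1 labels (toy data).
predicted_y = [0.9, 0.8, 0.7, 0.4, 0.35, 0.2]
true_y      = [1,   1,   0,   1,   0,    0]

# Threshold at 0.5 to get hard predictions for the confusion-matrix metrics
pred = [1 if p >= 0.5 else 0 for p in predicted_y]
tp = sum(1 for p, t in zip(pred, true_y) if p == 1 and t == 1)
fp = sum(1 for p, t in zip(pred, true_y) if p == 1 and t == 0)
fn = sum(1 for p, t in zip(pred, true_y) if p == 0 and t == 1)
tn = sum(1 for p, t in zip(pred, true_y) if p == 0 and t == 0)

accuracy  = (tp + tn) / len(true_y)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * precision * recall / (precision + recall)

# AUC: probability that a random positive outranks a random negative
pos = [p for p, t in zip(predicted_y, true_y) if t == 1]
neg = [p for p, t in zip(predicted_y, true_y) if t == 0]
auc = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) / (len(pos) * len(neg))

# KS: maximum gap between the TPR and FPR curves over all cutoffs
ks = max(
    abs(sum(p >= c for p in pos) / len(pos) - sum(n >= c for n in neg) / len(neg))
    for c in predicted_y
)
print(accuracy, precision, recall, f1, auc, ks)
```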
The `to_table` method creates a pandas DataFrame with all supported metrics:

```python
train_dataset.to_table()
```

This method returns:
```python
# DataFrame with all supported metrics
train_dataset.metric_df

# Transposed DataFrame
train_dataset.t_metric_df
```
The `split_rate_graph` method creates a graph showing the good or bad rate of your model. It has a `bins` attribute, which allows the user to change the desired number of splits (default = 10):

```python
train_dataset.split_rate_graph()
```
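The idea behind the split rate (the bad rate within each probability bin) can be sketched independently of the library. The example below is plain Python with hypothetical data and `bins = 5` rather than the default 10; it is not dcdevaluation's own code:

```python
# Split-rate sketch: share of "bad" (label 1) cases per probability bin.
predicted_y = [0.05, 0.15, 0.35, 0.45, 0.55, 0.65, 0.85, 0.95]
true_y      = [0,    0,    0,    1,    0,    1,    1,    1]

bins = 5
rates = []
for b in range(bins):
    lo, hi = b / bins, (b + 1) / bins
    # collect the labels of observations falling in this bin;
    # a probability of exactly 1.0 goes into the last bin
    members = [t for p, t in zip(predicted_y, true_y)
               if lo <= p < hi or (b == bins - 1 and p == 1.0)]
    rate = sum(members) / len(members) if members else None
    rates.append(rate)
print(rates)
```

A well-ranked model shows the rate climbing steadily from the low-probability bins to the high ones, which is what the graph makes visible.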
The `find_cut` method shows precision, recall, and F1 score for 20 different cutting points. It also offers the option to select a range of cutting points (default: min = 0, max = 20):

```python
train_dataset.find_cut()
```
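What such a survey of cutting points looks like can be sketched in plain Python. The example below uses toy data and 20 evenly spaced cutoffs; it illustrates the idea, not the library's own implementation:

```python
# Precision, recall and F1 at 20 evenly spaced cutting points (toy data).
predicted_y = [0.9, 0.8, 0.7, 0.4, 0.35, 0.2]
true_y      = [1,   1,   0,   1,   0,    0]

rows = []
for i in range(20):                      # cutting points 0.00, 0.05, ..., 0.95
    cut = i / 20
    pred = [1 if p >= cut else 0 for p in predicted_y]
    tp = sum(1 for p, t in zip(pred, true_y) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, true_y) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, true_y) if p == 0 and t == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    rows.append((cut, precision, recall, f1))

best = max(rows, key=lambda r: r[3])     # cutting point with the highest F1
print(best)
```

Scanning the table (or the graph the library draws) for the cutoff with the best trade-off is exactly the decision this method supports.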
The `ROC_auc` method creates a graph showing the ROC curve and its comparison to "the coin" (a random classifier):

```python
train_dataset.ROC_auc()
```
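The comparison to the coin can be illustrated by tracing the ROC points and integrating them with the trapezoid rule: the coin's ROC curve is the diagonal, with an area of exactly 0.5. Again a plain-Python sketch with toy data, not the library's code:

```python
# Trace the ROC curve and compare its area to the random-guess baseline.
predicted_y = [0.9, 0.8, 0.7, 0.4, 0.35, 0.2]
true_y      = [1,   1,   0,   1,   0,    0]

pos = [p for p, t in zip(predicted_y, true_y) if t == 1]
neg = [p for p, t in zip(predicted_y, true_y) if t == 0]

# Sweep thresholds from high to low; each one yields an (FPR, TPR) point
points = [(0.0, 0.0)]
for cut in sorted(set(predicted_y), reverse=True):
    tpr = sum(p >= cut for p in pos) / len(pos)
    fpr = sum(n >= cut for n in neg) / len(neg)
    points.append((fpr, tpr))

# Area under the curve by the trapezoid rule
auc = sum((x2 - x1) * (y1 + y2) / 2
          for (x1, y1), (x2, y2) in zip(points, points[1:]))
coin_auc = 0.5                     # area under the diagonal baseline
print(points, auc, auc > coin_auc)
```

A model whose curve hugs the top-left corner (area well above 0.5) is ranking positives ahead of negatives far better than chance.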