
I have a dataset that looks like

      A         B         C         D       sex        weight
  0.955136  0.802256  0.317182 -0.708615  female       normal
  0.463615 -0.860053 -0.136408 -0.892888    male        obese
 -0.855532 -0.181905 -1.175605  1.396793  female   overweight
 -1.236216 -1.329982  0.531241  2.064822    male  underweight
 -0.970420 -0.481791 -0.995313  0.672131    male        obese

I would like, given the features X = [A, B, C, D] and the labels y = [sex, weight], to train a machine learning model that can predict both the sex and the weight of a person from the features A, B, C and D. How can this be achieved? Could you please suggest a library or reading material that would help me do this? For easier testing, the dataset can be generated artificially with the following code:

import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))
df['sex'] = [np.random.choice(['male', 'female']) for x in range(len(df))]
df['weight'] = [np.random.choice(['underweight', 'normal', 'overweight', 'obese'])
                for x in range(len(df))]
This is a multi-output multi-class task, not multi-label; there are subtle differences between them. You can either train individual models for each target (one model for sex, another for weight, as the answer below does) or use classifiers that support this type of task directly. See "Support multiclass-multioutput" here. - Vivek Kumar
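For the second option in this comment, here is a minimal sketch using scikit-learn's RandomForestClassifier, which supports multiclass-multioutput targets natively (sklearn.multioutput.MultiOutputClassifier can wrap classifiers that don't); the column names simply follow the question's dataset:

import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# regenerate the toy dataset from the question
df = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))
df['sex'] = np.random.choice(['male', 'female'], size=len(df))
df['weight'] = np.random.choice(['underweight', 'normal', 'overweight', 'obese'],
                                size=len(df))

X = df[list('ABCD')]
y = df[['sex', 'weight']]                 # two label columns at once
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.4)

clf = RandomForestClassifier(n_estimators=100)
clf.fit(Xtr, ytr)                         # one fit covers both targets
print(clf.predict(Xte[:5]))               # each predicted row is [sex, weight]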

1 Answer


You first need to convert the labels from string values to integers:

import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))
# fixed labels: integers instead of strings
df['sex'] = np.random.choice([0, 1], size=len(df))   # 0/1 instead of 'male'/'female'
df['weight'] = np.random.choice(4, size=len(df))     # 0-3 instead of the four category names
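If your real data comes with string labels as in the question, you don't need to regenerate anything; a minimal sketch of the same conversion with scikit-learn's LabelEncoder, assuming the original string-labelled frame:

from sklearn.preprocessing import LabelEncoder

# convert existing string labels to integer codes, one encoder per column
le_sex = LabelEncoder()
df['sex'] = le_sex.fit_transform(df['sex'])          # e.g. 'female' -> 0, 'male' -> 1
le_weight = LabelEncoder()
df['weight'] = le_weight.fit_transform(df['weight'])
# inverse_transform on either encoder maps the codes back to the strings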

%matplotlib inline
from pandas import DataFrame
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt

trg = df[['sex', 'weight']]               # target columns
trn = df.drop(['sex', 'weight'], axis=1)  # feature columns A, B, C, D
# list of different models to compare
models = [LinearRegression(),
          RandomForestRegressor(n_estimators=100, max_features='sqrt'),
          SVR(kernel='linear'),
          LogisticRegression()
          ]

Xtrn, Xtest, Ytrn, Ytest = train_test_split(trn, trg, test_size=0.4)
rows = []
# for each model in the list, fit one copy per target column
for model in models:
    # model name without the parameter list
    m = str(model)
    tmp = {'Model': m[:m.index('(')]}
    for i in range(Ytrn.shape[1]):
        # fit the model on the i-th target
        model.fit(Xtrn, Ytrn.iloc[:, i])
        # coefficient of determination against the matching test column
        tmp['R2_Y%s' % str(i + 1)] = r2_score(Ytest.iloc[:, i], model.predict(Xtest))
    rows.append(tmp)
# build the final dataframe (DataFrame.append was removed in pandas 2.0)
TestModels = DataFrame(rows)
# make an index by model name
TestModels.set_index('Model', inplace=True)

fig, axes = plt.subplots(ncols=2, figsize=(10,4))
TestModels.R2_Y1.plot(ax=axes[0], kind='bar', title='R2_Y1')
TestModels.R2_Y2.plot(ax=axes[1], kind='bar', color='green', title='R2_Y2')
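One caveat on the evaluation above: sex and weight are categorical, so R² is only a rough proxy for how well each model does. For the classifier in the list, a classification metric such as accuracy is usually more informative; a minimal sketch reusing the split from above:

from sklearn.metrics import accuracy_score

clf = LogisticRegression()
clf.fit(Xtrn, Ytrn.iloc[:, 0])                        # sex column only
print(accuracy_score(Ytest.iloc[:, 0], clf.predict(Xtest)))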