I'm building a portfolio optimization model with PuLP. Each asset in my list has one of three liquidity levels: 1, 2, or 3.
I'm trying to add a constraint that at least 20% of the portfolio must have Level 1 liquidity.
I have a dictionary (liquidity) that maps each asset to its level ({'Asset': Level}), and I'm attempting to build the constraint from it together with the asset variables (asset_vars), which are binary LpVariables:
asset_vars = pl.LpVariable.dicts("Assets", asset_list, lowBound=0, upBound=1, cat='Integer')
prob += pl.lpSum([liquidity[f] * asset_vars[f] for f in asset_vars]) >= ?
I know my current setup isn't close to what I need, but I'm having difficulty finding or coming up with a solution.
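A minimal sketch of one way to express this constraint, assuming the binary selection variables above and reading "20% of the portfolio" as 20% of the selected assets (the asset_list and liquidity values below are hypothetical placeholders, not data from the question):

```python
import pulp as pl

# Hypothetical placeholder data standing in for the real inputs.
asset_list = ['A', 'B', 'C', 'D', 'E']
liquidity = {'A': 1, 'B': 2, 'C': 1, 'D': 3, 'E': 2}

prob = pl.LpProblem("Portfolio", pl.LpMaximize)

# Binary selection variables, as in the question
# (0/1 bounds with cat='Integer' is equivalent to cat='Binary').
asset_vars = pl.LpVariable.dicts("Assets", asset_list, lowBound=0, upBound=1, cat='Integer')

# Count only the selected assets whose liquidity level is 1.
level1_count = pl.lpSum(asset_vars[f] for f in asset_list if liquidity[f] == 1)
total_count = pl.lpSum(asset_vars[f] for f in asset_list)

# "At least 20% of the portfolio is Level 1": the ratio form
# level1_count / total_count >= 0.2 is nonlinear, but multiplying both
# sides by total_count (which is nonnegative) keeps the constraint linear.
prob += level1_count >= 0.2 * total_count
```

If the portfolio were instead held in continuous weights summing to 1, the same idea would reduce to requiring the summed weights of the Level 1 assets to be at least 0.2.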