Found a weird corner case when differentiating between 0 and False today. If the initial list contains the numpy version of False (numpy.bool_(False)), the is comparisons don't work, because numpy.bool_(False) is not False.
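For instance, a quick check in the interpreter (outputs shown as of numpy 1.18.1) shows the mismatch directly: the two values compare equal, but they are not the same object.
>>> import numpy
>>> numpy.bool_(False) is False
False
>>> numpy.bool_(False) == False
True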
These arise all the time in comparisons that use numpy types. For example:
>>> type(numpy.array(50)<0)
<class 'numpy.bool_'>
The easiest way would be to compare using the numpy.bool_ type: (numpy.array(50) < 0) is numpy.False_. But doing that requires a numpy dependency. The solution I came up with was to do a string comparison instead (working as of numpy 1.18.1):
str(numpy.bool_(False)) == str(False)
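A couple of quick checks (again as of numpy 1.18.1) show that this treats the plain and numpy versions of False the same way while still rejecting 0:
>>> str(False) == str(False)
True
>>> str(numpy.bool_(False)) == str(False)
True
>>> str(0) == str(False)
False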
So when dealing with a list, à la @kindall's answer, it would be:
all(str(x) != str(False) for x in a_list)
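For example, with a made-up list (the name a_list is just a placeholder):
>>> a_list = [0, 1, 2.5]
>>> all(str(x) != str(False) for x in a_list)
True
>>> a_list = [0, 1, numpy.bool_(False)]
>>> all(str(x) != str(False) for x in a_list)
False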
Note that this test also has a problem with the string 'False'. To avoid that, you can additionally require that the value not be equal to its own string representation (str(foo) != foo), which also dodges a numpy string array. Here are some test outputs:
>>> foo = False
>>> str(foo) != foo and str(foo) == str(False)
True
>>> foo = numpy.bool_(False)
>>> str(foo) != foo and str(foo) == str(False)
True
>>> foo = 0
>>> str(foo) != foo and str(foo) == str(False)
False
>>> foo = 'False'
>>> str(foo) != foo and str(foo) == str(False)
False
>>> foo = numpy.array('False')
>>> str(foo) != foo and str(foo) == str(False)
array(False)
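If it helps, the combined check can be wrapped into a small helper; this is just a sketch based on the tests above, and the name is_false_value is my own, not anything from numpy or the standard library:
def is_false_value(foo):
    # True only for values that print as 'False' but are not themselves that
    # string (e.g. False or numpy.bool_(False)); False for 0, 'False', and a
    # 0-d numpy string array.
    return bool(str(foo) != foo and str(foo) == str(False))
The list check then becomes all(not is_false_value(x) for x in a_list).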
I am not really an expert programmer, so there may be limitations I've still missed, or a big reason not to do this, but it allowed me to differentiate 0 from False without needing to resort to a numpy dependency.