Thursday, April 9, 2015

Why is "if not (a and b)" faster than "if not a or not b"?

On a whim, I recently tested these two methods with timeit to see which way of evaluating the condition was faster:



import timeit

"""Test method returns True if either argument is falsey, else False."""

def and_chk((a, b)):
if not (a and b):
return True
return False

def not_or_chk((a, b)):
if not a or not b:
return True
return False


...and got these results:



method                values for (a, b):
                        (0,0)     (0,1)     (1,0)     (1,1)
and_chk(a, b)         0.95559   0.98646   0.95138   0.98788
not_or_chk(a, b)      0.96804   1.07323   0.96015   1.05874

...seconds per 1,111,111 cycles.
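
To put numbers on the gap, the relative slowdown of not_or_chk can be worked out straight from the table; a quick sketch (the dictionaries below simply restate the timings above):

# Relative slowdown of not_or_chk versus and_chk for each (a, b) column
# in the table above (timings in seconds per 1,111,111 cycles).
and_times    = {(0, 0): 0.95559, (0, 1): 0.98646, (1, 0): 0.95138, (1, 1): 0.98788}
not_or_times = {(0, 0): 0.96804, (0, 1): 1.07323, (1, 0): 0.96015, (1, 1): 1.05874}

for ab in sorted(and_times):
    slowdown = (not_or_times[ab] - and_times[ab]) / and_times[ab] * 100
    print('%s: not_or_chk is %.1f%% slower' % (ab, slowdown))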


The difference in efficiency is between one and nine percent, always in favour of if not (a and b). That is the opposite of what I would expect: as I understand it, if not a or not b evaluates its terms in order (not a, then not b) and, since there are no and clauses, runs the if block as soon as it hits a true expression. The and_chk method, by contrast, has to evaluate both operands of the and before the wrapping not can hand any result to the if.


The timing results, however, contradict that understanding. How, then, is the if condition actually being evaluated? I am well aware that this degree of micro-optimization is practically, if not completely, pointless; I just want to understand how Python goes about it.
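
One way to poke at this is to disassemble both functions with the standard dis module and compare the bytecode. On CPython 2.7 (which the tuple-parameter signatures above imply), the not a / not b tests typically compile to explicit UNARY_NOT instructions, while the single not wrapping (a and b) is folded into the conditional jumps, so and_chk gets away with slightly fewer instructions per call. A minimal sketch (Python 2 only, because of the tuple parameters):

import dis

def and_chk((a, b)):
    if not (a and b):
        return True
    return False

def not_or_chk((a, b)):
    if not a or not b:
        return True
    return False

# Print the bytecode for both functions.  Comparing the two instruction
# streams shows where the extra work in not_or_chk comes from (on CPython
# 2.7, typically a UNARY_NOT for each `not a` / `not b` test, versus plain
# conditional jumps in and_chk).
print('--- and_chk ---')
dis.dis(and_chk)
print('--- not_or_chk ---')
dis.dis(not_or_chk)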




For completeness' sake, this is how I set up timeit...



cyc = 1111111

# One single-use iterator per test case, each yielding `cyc` identical
# (a, b) tuples ("zeroTrue" = element 0 is truthy, "oneTrue" = element 1).
bothFalse_and = iter([(0,0)] * cyc)
zeroTrue_and = iter([(1,0)] * cyc)
oneTrue_and = iter([(0,1)] * cyc)
bothTrue_and = iter([(1,1)] * cyc)

bothFalse_notor = iter([(0,0)] * cyc)
zeroTrue_notor = iter([(1,0)] * cyc)
oneTrue_notor = iter([(0,1)] * cyc)
bothTrue_notor = iter([(1,1)] * cyc)

time_bothFalse_and = timeit.Timer('and_chk(next(tups))', 'from __main__ import bothFalse_and as tups, and_chk')
time_zeroTrue_and = timeit.Timer('and_chk(next(tups))', 'from __main__ import zeroTrue_and as tups, and_chk')
time_oneTrue_and = timeit.Timer('and_chk(next(tups))', 'from __main__ import oneTrue_and as tups, and_chk')
time_bothTrue_and = timeit.Timer('and_chk(next(tups))', 'from __main__ import bothTrue_and as tups, and_chk')

time_bothFalse_notor = timeit.Timer('not_or_chk(next(tups))', 'from __main__ import bothFalse_notor as tups, not_or_chk')
time_zeroTrue_notor = timeit.Timer('not_or_chk(next(tups))', 'from __main__ import zeroTrue_notor as tups, not_or_chk')
time_oneTrue_notor = timeit.Timer('not_or_chk(next(tups))', 'from __main__ import oneTrue_notor as tups, not_or_chk')
time_bothTrue_notor = timeit.Timer('not_or_chk(next(tups))', 'from __main__ import bothTrue_notor as tups, not_or_chk')


...then ran each timeit.Timer(...) instance with .timeit(cyc) to get the results posted above.
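
The exact driver loop isn't shown above; it amounts to something like the following sketch (the labels and ordering are illustrative, only the Timer objects and cyc come from the setup code):

# Illustrative driver: each iterator above holds exactly `cyc` tuples, so
# each Timer can be run through .timeit(cyc) once before its data runs out.
timers = [
    ('and_chk    (0,0)', time_bothFalse_and),
    ('and_chk    (1,0)', time_zeroTrue_and),
    ('and_chk    (0,1)', time_oneTrue_and),
    ('and_chk    (1,1)', time_bothTrue_and),
    ('not_or_chk (0,0)', time_bothFalse_notor),
    ('not_or_chk (1,0)', time_zeroTrue_notor),
    ('not_or_chk (0,1)', time_oneTrue_notor),
    ('not_or_chk (1,1)', time_bothTrue_notor),
]

for label, timer in timers:
    print('%s  %.5f seconds per %d cycles' % (label, timer.timeit(cyc), cyc))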

