Python multiprocessing speed


I wrote a bit of code to test out Python's multiprocessing on my computer:

    from multiprocessing import Pool

    var = range(5000000)

    def test_func(i):
        return i + 1

    if __name__ == '__main__':
        p = Pool()
        var = p.map(test_func, var)

I timed it using Unix's time command, and the results were:

    real    0m2.914s
    user    0m4.705s
    sys     0m1.406s

Then, using the same var and test_func(), I timed:

    var = map(test_func, var)

and the results were:

    real    0m1.785s
    user    0m1.548s
    sys     0m0.214s

Shouldn't the multiprocessing code be faster than plain old map?

Why should it be?

With the map function, you are calling the function sequentially, in a single process.

A multiprocessing pool creates a set of workers over which the task is mapped. It has to coordinate multiple worker processes to run these functions.

Try doing some significant work inside the function, then time the two versions again, and see whether multiprocessing helps you compute faster.
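To see that crossover, give each call real CPU-bound work. Here is a minimal sketch that times both versions on the same inputs; heavy and compare are illustrative names of my own, not from the question, and the actual timings will depend on your machine:

```python
import time
from multiprocessing import Pool

def heavy(n):
    # CPU-bound work: sum of squares below n
    total = 0
    for i in range(n):
        total += i * i
    return total

def compare(inputs):
    """Time sequential map vs. Pool.map on the same inputs."""
    t0 = time.perf_counter()
    serial = list(map(heavy, inputs))
    t_serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    with Pool() as p:
        parallel = p.map(heavy, inputs)
    t_parallel = time.perf_counter() - t0

    return serial, parallel, t_serial, t_parallel

if __name__ == '__main__':
    serial, parallel, t_s, t_p = compare([300_000] * 16)
    print(f"map:      {t_s:.2f}s")
    print(f"Pool.map: {t_p:.2f}s")
```

With work this heavy per call, the Pool version should generally pull ahead on a multi-core machine; with the trivial i + 1 body from the question, it will not.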

You have to understand that there are overheads in using multiprocessing: starting worker processes, and pickling arguments and results to pass them between processes. Only when the computing effort is greater than these overheads will you see its benefits.
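A large part of that overhead is inter-process communication: each argument and return value is pickled and sent over a pipe. One mitigation worth knowing is Pool.map's chunksize parameter, which batches many items into a single message. A small sketch (parallel_inc is my name, not from the post):

```python
from multiprocessing import Pool

def inc(i):
    return i + 1

def parallel_inc(data, chunksize):
    # Larger chunks mean fewer pickled messages between the parent
    # and the workers, at the cost of coarser load balancing.
    with Pool() as p:
        return p.map(inc, data, chunksize=chunksize)

if __name__ == '__main__':
    out = parallel_inc(range(5_000_000), chunksize=100_000)
    print(out[:3], out[-1])
```

Pool.map already picks a chunksize heuristically when you don't pass one, so this mainly matters when you have very many very cheap tasks.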

See the last example in Doug Hellmann's excellent introduction: http://www.doughellmann.com/pymotw/multiprocessing/communication.html

    pool_size = multiprocessing.cpu_count() * 2
    pool = multiprocessing.Pool(processes=pool_size,
                                initializer=start_process,
                                maxtasksperchild=2,
                                )
    pool_outputs = pool.map(do_calculation, inputs)

You can size the pool according to the number of cores you have.
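Whether the ×2 oversubscription in that example helps depends on the workload. A hypothetical helper (pool_size is my name, not from the post) capturing the usual rule of thumb:

```python
import multiprocessing

def pool_size(io_bound=False):
    # CPU-bound: one worker per core is the usual default
    # (Pool() with no argument already uses the core count).
    # I/O-bound: oversubscribing keeps cores busy while some
    # workers sit blocked waiting on I/O.
    n = multiprocessing.cpu_count()
    return n * 2 if io_bound else n
```

For a pure compute job like test_func, extra workers beyond the core count just add scheduling and IPC overhead.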

