python - Why does iterative elementwise array multiplication slow down in numpy?


The code below reproduces a problem I have encountered in an algorithm I'm implementing:

import numpy.random as rand
import time

x = rand.normal(size=(300,50000))
y = rand.normal(size=(300,50000))

for i in range(1000):
    t0 = time.time()
    y *= x
    print "%.4f" % (time.time()-t0)
    y /= y.max() #to prevent overflows

The problem: after a number of iterations, things start getting gradually slower, until one iteration takes several times longer than it did initially.

A plot of the slowdown: [image: time per iteration, gradually increasing over the run]

CPU usage of the Python process is stable at around 17-18% the whole time.

I'm using:

  • Python 2.7.4, 32-bit version;
  • NumPy 1.7.1 with MKL;
  • Windows 8.

As @Alok pointed out, this seems to be caused by denormal numbers affecting performance. I ran it on an OSX system and confirmed the issue. I don't know of a way to flush denormals to 0 in NumPy. I will try to work around this issue in my algorithm by avoiding very small numbers: do I really need to keep dividing y until it gets down to the 1.e-324 level?
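There is, as far as I know, no NumPy API for flushing denormals, but a rough software emulation is easy to sketch (the helper name flush_denormals below is made up for illustration): zero out every entry whose magnitude falls below the smallest normal double.

import numpy as np

TINY = np.finfo(np.float64).tiny  # smallest positive normal double, ~2.2e-308

def flush_denormals(a):
    # Emulate hardware flush-to-zero in software: any entry smaller in
    # magnitude than the normal range is replaced by 0.0, in place.
    a[np.abs(a) < TINY] = 0.0
    return a

Calling flush_denormals(y) after the y *= x step should keep the array out of the denormal range, at the cost of an extra pass over the data each iteration.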

If you avoid the low numbers, e.g. by adding the following line in the loop:

y += 1e-100 

then you'll have constant time per iteration (albeit a bit slower, because of the extra operation).
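Put together with the benchmark from the question, that looks like this (a sketch of the same loop with the offset added; exact timings will of course vary by machine):

import numpy.random as rand
import time

x = rand.normal(size=(300,50000))
y = rand.normal(size=(300,50000))

for i in range(1000):
    t0 = time.time()
    y *= x
    y += 1e-100  # keeps the entries of y away from the denormal range
    print "%.4f" % (time.time()-t0)
    y /= y.max() #to prevent overflows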

Another workaround is to use higher-precision arithmetic, e.g.

x = rand.normal(size=(300,50000)).astype('longdouble')
y = rand.normal(size=(300,50000)).astype('longdouble')

This will make each of the steps more expensive, but each step will take the same amount of time.
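If you want to reproduce a comparison like the one below, a small harness along these lines will do (a sketch; the run function and its parameters are my own scaffolding, not from the original benchmark):

import numpy.random as rand
import time

def run(steps=1000, fix=False, dtype='double'):
    # Time each iteration of the multiply/normalize loop, optionally
    # adding the 1e-100 offset that avoids denormals.
    x = rand.normal(size=(300,50000)).astype(dtype)
    y = rand.normal(size=(300,50000)).astype(dtype)
    times = []
    for i in range(steps):
        t0 = time.time()
        y *= x
        if fix:
            y += 1e-100
        times.append(time.time() - t0)
        y /= y.max()
    return times

baseline = run()                    # slows down once denormals appear
offset = run(fix=True)              # roughly constant time per step
longdbl = run(dtype='longdouble')   # slower, but also constant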

See the following comparison on my system: [image: per-iteration timings for the original loop vs. the two workarounds]

