First things first: if this is about multiple cores on the same processor, numpy is already capable of parallelizing things better than we could ever hope to do by hand (see the discussion at multiplication of large arrays in python). The key in that case is simply to make sure the multiplication is all done in a wholesale array operation rather than in a Python for-loop:
test2 = x[n.newaxis, :] * y[:, n.newaxis]
n.abs(test - test2).max() # verify equivalence to mult(): output should be 0.0, or very small reflecting floating-point precision limitations
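(For reference, here is a self-contained sketch of that comparison. The array sizes, the alias n for numpy and the loop-based mult() are assumptions standing in for the code in the question.)
import numpy as n

x = n.random.rand(2000)                      # assumed stand-ins for the question's x and y
y = n.random.rand(1000)

def mult(x, y):                              # assumed loop-based version, as in the question
    out = n.empty((len(y), len(x)))
    for i in range(len(y)):
        out[i] = y[i] * x
    return out

test = mult(x, y)
test2 = x[n.newaxis, :] * y[:, n.newaxis]    # the same computation as one wholesale broadcast
print(n.abs(test - test2).max())             # 0.0, or tiny due to floating-point rounding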
[If you really wanted to spread this across multiple separate CPUs, that's a different matter, but the question seems to suggest a single (multi-core) CPU.]
OK, bearing all of the above in mind: let's say you want to parallelize an operation that is more complicated than just mult(). Let's assume you have tried your best to optimize your operation into wholesale array operations that numpy can parallelize by itself, but your operation just isn't susceptible to that. In that case, you can use a shared-memory multiprocessing.Array created with lock=False, together with multiprocessing.Pool, to assign processes to address non-overlapping chunks of it, divided up over the y dimension (and also simultaneously over x if you want). An example listing is given below. Note that this approach does not explicitly do exactly what you specified (collect the results and append them together into a single array). Rather, it does something more efficient: multiple processes simultaneously assemble their portions of the answer in non-overlapping parts of shared memory. Once that's done, there is no collating/appending to do: we just read out the result.
import os, numpy, multiprocessing, itertools
SHARED_VARS = {} # the best way to get multiprocessing.Pool to send shared multiprocessing.Array objects between processes is to attach them to something global - see http://stackoverflow.com/questions/1675766/
def operate(slices):
    # grok the inputs
    yslice, xslice = slices
    y, x, r = get_shared_arrays('y', 'x', 'r')
    # create views of the appropriate chunks/slices of the arrays:
    y = y[yslice]
    x = x[xslice]
    r = r[yslice, xslice]
    # do the actual business
    for i in range(len(r)):
        r[i] = y[i] * x  # If this is truly all operate() does, it can be parallelized far more efficiently by numpy itself.
                         # But let's assume this is a placeholder for something more complicated.
    return 'Process %d operated on y[%s] and x[%s] (%d x %d chunk)' % (os.getpid(), slicestr(yslice), slicestr(xslice), y.size, x.size)
def check(y, x, r):
    r2 = x[numpy.newaxis, :] * y[:, numpy.newaxis]  # obviously this check will only be valid if operate() literally does only multiplication (in which case this whole business is unnecessary)
    print('max. abs. diff. = %g' % numpy.abs(r - r2).max())
    return y, x, r
def slicestr(s):
    return ':'.join('' if x is None else str(x) for x in [s.start, s.stop, s.step])

def m2n(buf, shape, typecode, ismatrix=False):
    """
    Return a numpy.array VIEW of a multiprocessing.Array given a
    handle to the array, the shape, the data typecode, and a boolean
    flag indicating whether the result should be cast as a matrix.
    """
    a = numpy.frombuffer(buf, dtype=typecode).reshape(shape)
    if ismatrix: a = numpy.asmatrix(a)
    return a

def n2m(a):
    """
    Return a multiprocessing.Array COPY of a numpy.array, together
    with shape, typecode and matrix flag.
    """
    if not isinstance(a, numpy.ndarray): a = numpy.array(a)
    return multiprocessing.Array(a.dtype.char, a.flat, lock=False), tuple(a.shape), a.dtype.char, isinstance(a, numpy.matrix)

def new_shared_array(shape, typecode='d', ismatrix=False):
    """
    Allocate a new shared array and return all the details required
    to reinterpret it as a numpy array or matrix (same order of
    output arguments as n2m)
    """
    typecode = numpy.dtype(typecode).char
    return multiprocessing.Array(typecode, int(numpy.prod(shape)), lock=False), tuple(shape), typecode, ismatrix

def get_shared_arrays(*names):
    return [m2n(*SHARED_VARS[name]) for name in names]

def init(*pargs, **kwargs):
    SHARED_VARS.update(pargs, **kwargs)
if __name__ == '__main__':

    ylen = 1000
    xlen = 2000

    init(y=n2m(range(ylen)))
    init(x=n2m(numpy.random.rand(xlen)))
    init(r=new_shared_array([ylen, xlen], float))

    print('Master process ID is %s' % os.getpid())

    #print(operate([slice(None), slice(None)])); check(*get_shared_arrays('y', 'x', 'r'))  # local test

    # tuple() makes initargs a concrete, picklable sequence (a Python 3 dict view is not)
    pool = multiprocessing.Pool(initializer=init, initargs=tuple(SHARED_VARS.items()))

    yslices = [slice(0,333), slice(333,666), slice(666,None)]
    xslices = [slice(0,1000), slice(1000,None)]
    #xslices = [slice(None)]  # uncomment this if you only want to divide things up in the y dimension
    reports = pool.map(operate, itertools.product(yslices, xslices))

    print('\n'.join(reports))
    y, x, r = check(*get_shared_arrays('y', 'x', 'r'))
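If it helps to see the shared-memory mechanism in isolation, here is a minimal sketch of the trick the listing relies on: with lock=False, multiprocessing.Array is just a raw shared buffer, so workers and master can each view it through numpy.frombuffer, and the workers' in-place writes are immediately visible to the master. (The names buf, fill and the simplified init here are illustrative, not part of the listing above.)
import multiprocessing, numpy

def init(shared_buf):
    global BUF                                # stash the shared buffer in a global, as the listing does via SHARED_VARS
    BUF = shared_buf

def fill(args):
    start, stop = args
    view = numpy.frombuffer(BUF, dtype='d')   # numpy view of the shared buffer (no copy)
    view[start:stop] = start                  # each worker writes its own non-overlapping chunk

if __name__ == '__main__':
    buf = multiprocessing.Array('d', 10, lock=False)            # raw shared memory, no lock wrapper
    pool = multiprocessing.Pool(2, initializer=init, initargs=(buf,))
    pool.map(fill, [(0, 5), (5, 10)])
    print(numpy.frombuffer(buf, dtype='d'))                     # master reads the workers' writes directly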
How about a [mcve]? – boardrider
You will never program something in Python that is faster than numpy's internal broadcasting machinery, even with multiple threads/processes... let numpy do it internally – Aaron
Be careful not to use multiple threads/processes just for the sake of it. Doing a small amount of work on a large amount of data merely leaves the CPU bound by the speed of the memory bus (which is slow compared to the CPU's caches). So if your algorithm is I/O bound, adding more threads will not produce a speed-up. – bazza