There is this question, *How do you split a list into evenly sized chunks?*, about splitting a list into chunks. Is there any way to do this more efficiently for giant arrays using NumPy?

From the documentation:

```
>>> x = np.arange(8.0)
>>> np.array_split(x, 3)
[array([ 0.,  1.,  2.]), array([ 3.,  4.,  5.]), array([ 6.,  7.])]
```

It is identical to `numpy.split`, but it won't raise an exception if the groups aren't of equal length.

If the number of chunks is greater than `len(array)`, you get empty arrays nested inside the result. To address that, if your split result is saved in `a`, you can remove the empty arrays with:

```
[x for x in a if x.size > 0]
```

Just save that back in `a` if you wish.
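For instance, a small sketch of the filtering idea above:

```python
import numpy as np

# Splitting 3 elements into 5 chunks produces empty trailing arrays
a = np.array_split(np.arange(3), 5)
# chunk sizes are 1, 1, 1, 0, 0

# Keep only the non-empty chunks
a = [x for x in a if x.size > 0]
print(len(a))  # 3
```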

Here are some examples of using `array_split`, `split`, `hsplit` and `vsplit`:

```
In : a = np.random.randint(0, 10, [4, 4])

In : a
Out:
array([[2, 2, 7, 1],
       [5, 0, 3, 1],
       [2, 9, 8, 8],
       [5, 7, 7, 6]])
```

Some examples of using `array_split`:
If you give an array or list as the second argument, you specify the indices *before* which to 'cut':

```
# split rows into 0|1 2|3
In : np.array_split(a, [1, 3])
Out:
[array([[2, 2, 7, 1]]),
 array([[5, 0, 3, 1],
        [2, 9, 8, 8]]),
 array([[5, 7, 7, 6]])]

# split columns into 0| 1 2 3
In : np.array_split(a, [1], axis=1)
Out:
[array([[2],
        [5],
        [2],
        [5]]),
 array([[2, 7, 1],
        [0, 3, 1],
        [9, 8, 8],
        [7, 7, 6]])]
```

An integer as the second argument specifies the number of equal chunks:

```
In : np.array_split(a, 2, axis=1)
Out:
[array([[2, 2],
        [5, 0],
        [2, 9],
        [5, 7]]),
 array([[7, 1],
        [3, 1],
        [8, 8],
        [7, 6]])]
```

`split` works the same way, but raises an exception if an equal split is not possible.
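A quick illustration of the difference:

```python
import numpy as np

x = np.arange(8)

# array_split tolerates an uneven split...
print([c.size for c in np.array_split(x, 3)])  # [3, 3, 2]

# ...while split raises ValueError for the same request
try:
    np.split(x, 3)
except ValueError as e:
    print("split failed:", e)
```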

In addition to `array_split`, you can use the shortcuts `vsplit` and `hsplit`, which are pretty much self-explanatory:

```
In : np.vsplit(a, 2)
Out:
[array([[2, 2, 7, 1],
        [5, 0, 3, 1]]),
 array([[2, 9, 8, 8],
        [5, 7, 7, 6]])]

In : np.hsplit(a, 2)
Out:
[array([[2, 2],
        [5, 0],
        [2, 9],
        [5, 7]]),
 array([[7, 1],
        [3, 1],
        [8, 8],
        [7, 6]])]
```

I believe you're looking for `numpy.split`, or possibly `numpy.array_split` if the number of sections doesn't need to divide the size of the array evenly.

Not quite an answer, but rather a long comment with nicely formatted code to complement the other (correct) answers. If you try the following, you will see that what you are getting are *views* of the original array, not copies, which was not the case for the accepted answer in the linked question. Be aware of the possible side effects!

```
>>> x = np.arange(9.0)
>>> a, b, c = np.split(x, 3)
>>> a
array([ 0.,  1.,  2.])
>>> a[1] = 8
>>> a
array([ 0.,  8.,  2.])
>>> x
array([ 0.,  8.,  2.,  3.,  4.,  5.,  6.,  7.,  8.])
>>> def chunks(l, n):
...     """Yield successive n-sized chunks from l."""
...     for i in range(0, len(l), n):
...         yield l[i:i+n]
...
>>> l = list(range(9))
>>> a, b, c = chunks(l, 3)
>>> a
[0, 1, 2]
>>> a[1] = 8
>>> a
[0, 8, 2]
>>> l
[0, 1, 2, 3, 4, 5, 6, 7, 8]
```
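If the view semantics are a problem for your use case, one way around them (a small sketch) is to copy each chunk explicitly:

```python
import numpy as np

x = np.arange(9.0)
# .copy() detaches each chunk from the original array
a, b, c = [chunk.copy() for chunk in np.split(x, 3)]
a[1] = 8      # modifies only the copy
print(x[1])   # x is unchanged: 1.0
```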

How about this? Here you split the array by slicing it to the length you want:

```
a = np.random.randint(0, 10, [4, 4])

a
Out:
array([[1, 5, 8, 7],
       [3, 2, 4, 0],
       [7, 7, 6, 2],
       [7, 4, 3, 0]])

a[0:2, :]
Out:
array([[1, 5, 8, 7],
       [3, 2, 4, 0]])

a[2:4, :]
Out:
array([[7, 7, 6, 2],
       [7, 4, 3, 0]])
```
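The same slicing idea generalizes to any chunk length with a list comprehension (the step of 2 here is just an example):

```python
import numpy as np

a = np.random.randint(0, 10, [4, 4])

# Slice off `step` rows at a time; the last chunk may be shorter
step = 2
chunks = [a[i:i + step, :] for i in range(0, a.shape[0], step)]
print(len(chunks))  # 2 chunks of 2 rows each for a 4x4 array
```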

This can be achieved using NumPy's `as_strided`. I have put a spin on the answer by assuming that if the chunk size is not a factor of the total number of rows, the rest of the rows in the last batch will be filled with zeros.

```
import numpy as np
from numpy.lib.stride_tricks import as_strided

def batch_data(test, chunk_count):
    m, n = test.shape
    S = test.itemsize
    if not chunk_count:
        chunk_count = 1
    batch_size = m // chunk_count
    # Batches which can be covered fully
    test_batches = as_strided(test, shape=(chunk_count, batch_size, n),
                              strides=(batch_size*n*S, n*S, S)).copy()
    covered = chunk_count * batch_size
    if covered < m:
        # Pad the leftover rows with zeros to form a full-sized last batch
        rest = test[covered:, :]
        rm, rn = rest.shape
        mismatch = batch_size - rm
        last_batch = np.vstack((rest, np.zeros((mismatch, rn)))).reshape(1, -1, n)
        return np.vstack((test_batches, last_batch))
    return test_batches
```
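The same pad-and-batch behaviour can also be sketched without `as_strided`, using a plain `reshape`; the sizes here are illustrative:

```python
import numpy as np

# 10 rows into 3 full chunks of 3 rows, plus one zero-padded final chunk
test = np.arange(20.0).reshape(10, 2)
chunk_count = 3
m, n = test.shape
batch_size = m // chunk_count            # 3 rows per chunk
covered = chunk_count * batch_size       # 9 rows fit into full chunks
batches = test[:covered].reshape(chunk_count, batch_size, n)
if covered < m:
    rest = test[covered:]                # 1 leftover row
    pad = np.zeros((batch_size - rest.shape[0], n))
    last = np.vstack((rest, pad)).reshape(1, batch_size, n)
    batches = np.vstack((batches, last))
print(batches.shape)  # (4, 3, 2)
```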

This is based on my answer https://stackoverflow.com/a/68238815/5462372.