# [Solved] Pytorch tensor to numpy array

I have a `pytorch` Tensor of size `torch.Size([4, 3, 966, 1296])`

I want to convert it to a `numpy` array using the following code:

`imgs = imgs.numpy()[:, ::-1, :, :]`

Can anyone please explain what this code is doing?

## Solution #1:

The tensor you want to convert has 4 dimensions.

``````
[:, ::-1, :, :]
``````

`:` means that the first dimension is copied as-is during the conversion; the same goes for the third and fourth dimensions.

`::-1` reverses the second axis (the channel dimension), so the channel order of each image is flipped, e.g. from RGB to BGR.
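For context, here is a minimal, self-contained sketch of what that line does (the random data and the RGB-to-BGR interpretation are my assumptions, based on the 3-channel shape):

``````
import torch

# a hypothetical batch of 4 three-channel images, shape (N, C, H, W)
imgs = torch.rand(4, 3, 966, 1296)

# convert to a numpy array and reverse the channel axis,
# e.g. to swap RGB <-> BGR channel ordering
flipped = imgs.numpy()[:, ::-1, :, :]

print(flipped.shape)  # (4, 3, 966, 1296) -- the shape is unchanged
print((flipped[:, 0] == imgs.numpy()[:, 2]).all())  # True: channels reversed
``````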

## Solution #2:

I believe you also have to use `.detach()`. I had to convert my tensor to a numpy array on Colab, which uses CUDA and a GPU. I did it like this:

``````
# this is just my embedding matrix, which is a Torch tensor object
embedding = learn.model.u_weight

embedding_list = list(range(0, 64382))

input = torch.cuda.LongTensor(embedding_list)
tensor_array = embedding(input)

# the output of the line below is a numpy array
tensor_array.cpu().detach().numpy()
``````

## Solution #3:

This worked for me:

``````
np_arr = torch_tensor.cpu().detach().numpy()
``````
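As a quick usage sketch (my own illustration, assuming a CUDA device is available):

``````
import torch

# a tensor that is both on the GPU and attached to the autograd graph
torch_tensor = torch.ones((2, 2), device='cuda', requires_grad=True) * 3

# .cpu() copies to host memory, .detach() drops the autograd history
np_arr = torch_tensor.cpu().detach().numpy()
print(np_arr)  # [[3. 3.]
               #  [3. 3.]]
``````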

## Solution #4:

While other answers explain the question perfectly, I will add some real-life examples of converting tensors to numpy arrays:

### Example: Shared storage

A PyTorch tensor residing on the CPU shares its storage with the numpy array `na`:

``````
import torch

a = torch.ones((1, 2))
print(a)

# numpy() returns an ndarray that shares storage with the tensor
na = a.numpy()
na[0][0] = 10
print(na)
print(a)
``````

Output:

``````
tensor([[1., 1.]])
[[10.  1.]]
tensor([[10.,  1.]])
``````

### Example: Eliminate effect of shared storage, copy numpy array first

To avoid the effect of shared storage, we need to `copy()` the numpy array `na` to a new numpy array `nac`. The numpy `copy()` method creates new, separate storage.

``````
import torch

a = torch.ones((1, 2))
print(a)

na = a.numpy()
# copy() allocates new storage, decoupled from the tensor
nac = na.copy()
nac[0][0] = 10
print(nac)
print(na)
print(a)
``````

Output:

``````
tensor([[1., 1.]])
[[10.  1.]]
[[1. 1.]]
tensor([[1., 1.]])
``````

Now only the `nac` numpy array is altered by the line `nac[0][0] = 10`; `na` and `a` remain as they were.

### Example: CPU tensor with `requires_grad=True`

``````
import torch

a = torch.ones((1, 2), requires_grad=True)
print(a)

# detach() shares storage with a, but drops the autograd history
na = a.detach().numpy()
na[0][0] = 10
print(na)
print(a)
``````

Output:

``````
tensor([[1., 1.]], requires_grad=True)
[[10.  1.]]
tensor([[10.,  1.]], requires_grad=True)
``````

If instead we call:

``````
na = a.numpy()
``````

it raises `RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.`, because tensors with `requires_grad=True` are recorded by PyTorch's automatic differentiation. This explains why we need to `detach()` them first, before converting with `numpy()`. Note that `tensor.detach()` is the new, recommended way of doing what `tensor.data` did.
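As a quick sketch of that last note (my own illustration, not part of the original answer):

``````
import torch

a = torch.ones((1, 2), requires_grad=True)

# both detach() and .data yield a grad-free view of the same storage,
# but detach() is the recommended, autograd-aware replacement
print(a.detach().requires_grad)  # False
print(a.data.requires_grad)      # False

# numpy() is allowed once the tensor no longer requires grad
na = a.detach().numpy()
print(na)  # [[1. 1.]]
``````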

### Example: CUDA tensor with `requires_grad=False`

``````
a = torch.ones((1, 2), device='cuda')
print(a)

# to('cpu') copies the tensor to host memory, so na has its own storage
na = a.to('cpu').numpy()
na[0][0] = 10
print(na)
print(a)
``````

Output:

``````
tensor([[1., 1.]], device='cuda:0')
[[10.  1.]]
tensor([[1., 1.]], device='cuda:0')
``````


### Example: CUDA tensor with `requires_grad=True`

``````
a = torch.ones((1, 2), device='cuda', requires_grad=True)
print(a)

# detach() first, then copy to host memory
na = a.detach().to('cpu').numpy()
na[0][0] = 10
print(na)
print(a)
``````

Output:

``````
tensor([[1., 1.]], device='cuda:0', requires_grad=True)
[[10.  1.]]
tensor([[1., 1.]], device='cuda:0', requires_grad=True)
``````

Without the `detach()` method, the error `RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.` will be raised.

Without the `.to('cpu')` method, the error `TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.` will be raised.

You could use `cpu()` instead of `to('cpu')`, but I prefer the newer `to('cpu')`.
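As a sketch (assuming a CUDA device is available), both failure modes can be triggered and caught directly:

``````
import torch

a = torch.ones((1, 2), device='cuda', requires_grad=True)

try:
    a.numpy()  # fails: the tensor still requires grad
except RuntimeError as e:
    print(e)

try:
    a.detach().numpy()  # fails: the tensor still lives on the GPU
except TypeError as e:
    print(e)

# detach first, then move to host memory
print(a.detach().to('cpu').numpy())  # [[1. 1.]]
``````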

## Solution #5:

You can use this syntax if gradients are attached to your variables:

`y = torch.Tensor.cpu(x).detach().numpy()[:, :, :, -1]`
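For reference, the same conversion written as method calls on the tensor itself is equivalent; a minimal sketch with a hypothetical 4-D tensor:

``````
import torch

# hypothetical 4-D tensor carrying autograd history
x = torch.rand(2, 3, 4, 5, requires_grad=True)

# same as torch.Tensor.cpu(x).detach().numpy()[:, :, :, -1]
y = x.cpu().detach().numpy()[:, :, :, -1]
print(y.shape)  # (2, 3, 4)
``````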

## Solution #6:

Your question is very poorly worded. Your code (sort of) already does what you want. What exactly are you confused about? `x.numpy()` answers the original title of your question:

Pytorch tensor to numpy array

Anyway, just in case this is useful to others: you might need to call `.detach()` for your code to work, e.g. if you see

``````
RuntimeError: Can't call numpy() on Variable that requires grad.
``````

So call `.detach()`. Sample code:

``````
# creating data, running it through a nn, and saving it

import torch
import torch.nn as nn

from pathlib import Path
from collections import OrderedDict

import numpy as np

path = Path('~/data/tmp/').expanduser()
path.mkdir(parents=True, exist_ok=True)

num_samples = 3
Din, Dout = 1, 1
lb, ub = -1, 1

x = torch.distributions.Uniform(low=lb, high=ub).sample((num_samples, Din))

f = nn.Sequential(OrderedDict([
    ('f1', nn.Linear(Din, Dout)),
    ('out', nn.SELU())
]))
y = f(x)

# save data
# y.numpy()  # would raise: Can't call numpy() on Tensor that requires grad
x_np, y_np = x.detach().cpu().numpy(), y.detach().cpu().numpy()
np.savez(path / 'db', x=x_np, y=y_np)

print(x_np)
``````

`cpu()` goes after `detach()`. See: https://discuss.pytorch.org/t/should-it-really-be-necessary-to-do-var-detach-cpu-numpy/35489/5

Also, I won't make any comments on the slicing since that is off topic and should not be the focus of your question. See this:

Understanding slice notation

The answers/resolutions are collected from Stack Overflow and are licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.