Torch vs NumPy objects: On CPU and GPU

[This tutorial is under development. Please don't use it for now.]

We will make the following comparisons (a short sketch of the two object types follows the list):

NumPy vs Torch object comparison on CPU

NumPy vs Torch object comparison on GPU
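
Both libraries represent the same kind of data: an n-dimensional array of numbers, exposed as numpy.ndarray in NumPy and torch.Tensor in Torch. As a minimal sketch (the shape here is arbitrary), the two objects can be converted back and forth and, on CPU, can even share the same memory:

import numpy as np
import torch

a = np.random.random((2, 3))   # NumPy ndarray (float64, CPU)
t = torch.from_numpy(a)        # Torch tensor sharing the same memory as a
b = t.numpy()                  # back to a NumPy view of that tensor

print(type(a), type(t), type(b))
print(t.dtype)                 # torch.float64, inherited from the NumPy array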

import numpy as np
import torch
from datetime import datetime

# two random Torch tensors whose shapes allow matrix multiplication
x = torch.rand(1500, 2000)
y = torch.rand(2000, 3000)

# pick the GPU if one is available, otherwise fall back to the CPU
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print(device)

# Torch computation on GPU
x = x.to(device)
y = y.to(device)
t1 = datetime.now()
for i in range(1000):
    z = x @ y
if device == 'cuda':
    torch.cuda.synchronize()  # wait for all queued GPU kernels to finish before stopping the timer
print("GPU Time: ", (datetime.now() - t1).seconds)

# Torch computation on CPU
x = x.cpu()
y = y.cpu()
t1 = datetime.now()
for i in range(1000):
    z = x @ y
print("CPU Time: ", (datetime.now() - t1).seconds)

# NumPy computation (always on CPU)
# note: np.random.random gives float64 arrays, while torch.rand gives float32 tensors,
# so this is not an exactly like-for-like comparison with the Torch runs above
x = np.random.random((1500, 2000))
y = np.random.random((2000, 3000))
t1 = datetime.now()
for i in range(1000):
    z = np.matmul(x, y)
print("Numpy Time (CPU): ", (datetime.now() - t1).seconds)

Output:

cuda
GPU Time:  1 second
CPU Time:  35 seconds
Numpy Time (CPU):  121 seconds
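
The .seconds attribute of a timedelta truncates to whole seconds, so a "1 second" GPU figure is quite coarse. If finer-grained numbers are wanted, here is a minimal sketch using time.perf_counter with explicit synchronization (it assumes a CUDA device is available and reuses the tensor shapes from above):

import time
import torch

x = torch.rand(1500, 2000, device='cuda')
y = torch.rand(2000, 3000, device='cuda')

torch.cuda.synchronize()          # make sure setup work is done before timing starts
t0 = time.perf_counter()
for _ in range(1000):
    z = x @ y
torch.cuda.synchronize()          # wait for all 1000 matmuls to finish
print("GPU Time: %.3f seconds" % (time.perf_counter() - t0))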