A random walk, also known as the drunkard’s walk, starts at some position and takes each subsequent step in a random direction. In its simplest form, you can generate one in one dimension using the following pseudocode:
Parameters:
-----------
T: number of steps to take

Initialise position = random position
for t = 1 to T do
    Set position = position + randomly choose between -1 and +1
end for
The result will look something like this. The plot below shows ten random walks for 5000 steps.
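Such a plot can be generated vectorised; here is a minimal sketch with NumPy (the seed and the use of `cumsum` are illustrative choices, not part of the benchmarked code):

```python
import numpy as np

# Ten independent 1D walks of 5000 steps each.
rng = np.random.default_rng(seed=42)  # seed is arbitrary
steps = rng.choice([-1, 1], size=(10, 5000))  # each step is -1 or +1
walks = np.cumsum(steps, axis=1)  # running position of each walker

print(walks.shape)  # (10, 5000)
```

Each row of `walks` is one trajectory, ready to be passed to a plotting library such as Matplotlib.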
Python
This is what the random walker looks like in pure Python.
import random

class Walker1D:
    """Random Walker in 1D space"""

    def __init__(self, initial_position=0):
        """
        Parameters
        ----------
        initial_position: int, optional
            The initial position of the walker in 1D space
        """
        self.pos = initial_position

    def move(self, pos):
        """Move the walker to the next position

        Parameters
        ----------
        pos: int
            The current position of the walker
        """
        # random.randint(-1, 1) would also yield 0, so pick -1 or +1 explicitly
        return pos + random.choice((-1, 1))

    def walk(self, steps):
        """Take a random walk for the given number of steps

        Parameters
        ----------
        steps: int
            Number of steps to take during this walk
        """
        pos = self.pos
        trajectory = [pos]
        for _ in range(steps):
            new_pos = self.move(pos)
            trajectory.append(new_pos)
            pos = new_pos
        return trajectory
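Numbers like those in the table below can be collected by timing `walk` with the standard-library `timeit`; a minimal sketch (the repeat count is arbitrary, and the class is restated in condensed form so the snippet runs on its own):

```python
import random
import timeit

class Walker1D:
    """Random Walker in 1D space (condensed from above)."""
    def __init__(self, initial_position=0):
        self.pos = initial_position

    def move(self, pos):
        return pos + random.choice((-1, 1))

    def walk(self, steps):
        pos = self.pos
        trajectory = [pos]
        for _ in range(steps):
            pos = self.move(pos)
            trajectory.append(pos)
        return trajectory

walker = Walker1D()
trajectory = walker.walk(1000)
assert len(trajectory) == 1001  # initial position plus 1000 steps

# Time a 10^3-step walk, best of 5 runs, reported in milliseconds.
best = min(timeit.repeat(lambda: walker.walk(10**3), number=1, repeat=5))
print(f"10^3 steps: {best * 1e3:.3f} ms")
```

Absolute numbers will of course vary with hardware, so treat the tables here as relative comparisons.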
Steps | Time (ms) |
---|---|
10^3 | 3.22 |
10^6 | 1,370 |
10^7 | 14,700 |
10^8 | 129,000 |
10^9 | 1,680,000 |
Julia
And this is the Julia counterpart. Here, we make use of an immutable `struct`.
"""
Encapsulates 1D walker
"""
struct Walker1D
x::Int64
end
"""
Moves the walker to the next position
# Arguments
- `w::Walker1D`: one-dimensional walker
"""
move(w::Walker1D) = Walker1D(w.x + rand((-1,1)))
"""
Makes the walker walk for T timesteps
# Arguments
- `w::Walker1D`: one-dimensional walker
- `T:Int64`: Number of timesteps in the walk
"""
function walk(w, T)
trajectory = [w]
for i ∈ 1:T
w = move(w)
push!(trajectory, deepcopy(w))
end
return trajectory
end
Steps | Time (ms) |
---|---|
10^3 | 1.917 |
10^6 | 20 |
10^7 | 196 |
10^8 | 3,370 |
10^9 | 359,000 |
Numba
The Julia micro-benchmark, if we may call it that, looks quite impressive: it’s way faster than its Python equivalent. Julia programs compile to efficient native code via LLVM⚡️. Is there any library or tool that can do that for Python🤔?

Enter Numba 🙌🏽, a JIT compiler that turns Python code into fast native code. It uses LLVM⚡️ too, and so do Swift and Rust.

Sounds very good, so what’s the catch? Well, it only works on a subset of Python and NumPy code, but that subset covers most of what we care about here.
import random

from numba import int32
from numba.experimental import jitclass

spec = [
    ('initial_position', int32),
    ('pos', int32),
]

@jitclass(spec)
class NumbaWalker1D:
    """Random Walker in 1D space"""

    def __init__(self, initial_position=0):
        """
        Parameters
        ----------
        initial_position: int, optional
            The initial position of the walker in 1D space
        """
        self.initial_position = initial_position

    def initialize(self):
        self.pos = self.initial_position

    def move(self, pos):
        """Move the walker to the next position

        Parameters
        ----------
        pos: int
            The current position of the walker
        """
        # random.randint(-1, 1) would also yield 0, so map {0, 1} onto {-1, +1}
        return pos + 2 * random.randint(0, 1) - 1

    def walk(self, steps):
        """Take a random walk for the given number of steps

        Parameters
        ----------
        steps: int
            Number of steps to take during this walk
        """
        self.initialize()
        trajectory = [self.pos]
        for _ in range(steps):
            new_pos = self.move(self.pos)
            trajectory.append(new_pos)
            self.pos = new_pos
        return trajectory
Steps | Time (ms) |
---|---|
10^3 | 7.7e-5 |
10^6 | 58 |
10^7 | 539 |
10^8 | 5,300 |
10^9 | 85,000 |
Results
Putting it all in one table, we observe that while the Julia code is significantly faster than pure Python, Numba was able to accelerate the pure Python code in this example. The Numba-jitted code was even significantly faster than Julia when we increased the number of steps to 10^9.
Steps | Python | Julia | Numba |
---|---|---|---|
10^3 | 3.22 | 1.917 | 7.7e-5 |
10^6 | 1,370 | 20 | 58 |
10^7 | 14,700 | 196 | 539 |
10^8 | 129,000 | 3,370 | 5,300 |
10^9 | 1,680,000 | 359,000 | 85,000 |
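The relative speedups implied by the table can be computed directly (timings copied from the table above, in milliseconds):

```python
# Wall-clock times in milliseconds, copied from the table above.
timings = {
    "10^3": {"Python": 3.22,      "Julia": 1.917,   "Numba": 7.7e-5},
    "10^6": {"Python": 1_370,     "Julia": 20,      "Numba": 58},
    "10^7": {"Python": 14_700,    "Julia": 196,     "Numba": 539},
    "10^8": {"Python": 129_000,   "Julia": 3_370,   "Numba": 5_300},
    "10^9": {"Python": 1_680_000, "Julia": 359_000, "Numba": 85_000},
}

for steps, t in timings.items():
    print(f"{steps}: Python/Julia = {t['Python'] / t['Julia']:.1f}x, "
          f"Python/Numba = {t['Python'] / t['Numba']:.1f}x")
```

At 10^9 steps, pure Python is roughly 4.7x slower than Julia and about 20x slower than the Numba version, while at 10^6 through 10^8 steps Julia still edges out Numba.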
See also
- Exploration Log of Contrastive Language-Image Pre-training
- 📙 CIFAR-10 Classifiers: Part 7 - Speed up PyTorch hyperparameter search using Ray Tune
- 📙 CIFAR-10 Classifiers: Part 6 - Speed up TensorFlow hyperparameter search using Optuna
- 🧬 How long is the Human DNA?
- 📙 CIFAR-10 Classifiers: Part 5 - Speed up TensorFlow hyperparameter search using Ray Tune