C-space Samplers
The following samplers generate configurations \(q \in C\) in the C-Space with different strategies.
Random Policy Sampler
- class sbp_env.samplers.randomPolicySampler.RandomPolicySampler(random_method='pseudo_random', **kwargs)[source]
Bases:
Sampler
Uniformly and randomly samples configurations across all \(d\) dimensions, where \(d\) is the dimensionality of the C-Space.
RandomPolicySampler
samples configurations \(q \in \mathbb{R}^d\) uniformly across each dimension, with a \(0 \le \epsilon < 1\) bias towards the goal configuration. A random number \(p \sim \mathcal{U}(0,1)\) is first drawn; the configuration \(q_\text{new}\) that this function returns is then given by
\[q_\text{new} = \begin{cases} q_\text{target} & \text{if } p < \epsilon\\ q \sim \mathcal{U}(0,1)^d & \text{otherwise.} \end{cases} \]
- Parameters
random_method (str, optional) – the kind of random method to use. Must be a choice from randomness.SUPPORTED_RANDOM_METHODS. Default: 'pseudo_random'
kwargs – pass through to super class
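As a sketch, the goal-biased sampling rule above amounts to the following (`goal_biased_sample` and its `rng` parameter are illustrative names, not part of the sbp_env API):

```python
import numpy as np

def goal_biased_sample(q_target, epsilon, d, rng=None):
    """Return q_target with probability epsilon, else a uniform sample in [0, 1)^d."""
    rng = rng or np.random.default_rng()
    if rng.uniform() < epsilon:  # p ~ U(0, 1)
        return np.asarray(q_target, dtype=float)
    return rng.uniform(size=d)  # q ~ U(0, 1)^d
```

With \(\epsilon = 0\) the goal branch is never taken, which is exactly how PRMSampler below disables goal biasing.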
- sbp_env.randomness.SUPPORTED_RANDOM_METHODS = ('pseudo_random', 'sobol_sequence', 'saltelli', 'latin_hypercube', 'finite_differences', 'fast')
Supported methods to draw random numbers in samplers. The supported random methods are as follows:
pseudo_random – pseudo random, from numpy
sobol_sequence – Sobol sequence, from SALib
saltelli – Saltelli’s extension of the Sobol sequence, from SALib
latin_hypercube – Latin hypercube, from SALib
finite_differences – finite differences, from SALib
fast – Fourier Amplitude Sensitivity Test (FAST), from SALib
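The SALib generators are not reimplemented here, but a tiny 1-D Halton generator (a related low-discrepancy sequence, purely illustrative) shows how quasi-random sequences like Sobol fill the unit interval more evenly than pseudo-random draws:

```python
import numpy as np

def halton(n, base=2):
    """First n points of the 1-D Halton low-discrepancy sequence (illustrative only)."""
    out = []
    for i in range(1, n + 1):
        f, r = 1.0, 0.0
        while i > 0:          # radical-inverse of i in the given base
            f /= base
            r += f * (i % base)
            i //= base
        out.append(r)
    return np.array(out)
```

Each new point lands in the largest remaining gap, whereas pseudo-random draws may cluster arbitrarily.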
Bidirectional Random Sampler
- class sbp_env.samplers.birrtSampler.BiRRTSampler(**kwargs)[source]
Bases:
Sampler
The sampler that is used internally by
BiRRTPlanner
. Internally, BiRRTSampler uses RandomPolicySampler to draw from its supported random methods. The main difference lies in the epsilon biasing: when \[p \sim \mathcal{U}(0,1) < \epsilon,\] the sampler biases towards the root of the start or goal tree, depending on the current tree \(\mathcal{T}_\text{current}\) that BiRRTSampler is planning for (in contrast to always biasing towards the goal tree). That is, \(p \sim \mathcal{U}(0,1)\) is first drawn, then \(q_\text{new}\) is given by
\[q_\text{new} = \begin{cases} q_\text{target} & \text{if } p < \epsilon \text{ and } \mathcal{T}_\text{current} \equiv \mathcal{T}_\text{start}\\ q_\text{start} & \text{if } p < \epsilon \text{ and } \mathcal{T}_\text{current} \equiv \mathcal{T}_\text{target}\\ q \sim \mathcal{U}(0,1)^d & \text{otherwise.} \end{cases} \]
- set_use_radian(value=True)[source]
Overrides the super class method such that the value will be passed to the internal
samplers.randomPolicySampler.RandomPolicySampler
- Parameters
value (optional) – whether to use radians. Default: True
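A minimal sketch of this tree-dependent biasing rule (`birrt_biased_sample` and the `on_start_tree` flag are illustrative names, not sbp_env API):

```python
import numpy as np

def birrt_biased_sample(q_start, q_target, epsilon, d, on_start_tree, rng=None):
    """With probability epsilon, bias towards the opposite tree's root."""
    rng = rng or np.random.default_rng()
    if rng.uniform() < epsilon:  # p ~ U(0, 1) < epsilon
        # growing the start tree biases towards the goal, and vice versa
        target = q_target if on_start_tree else q_start
        return np.asarray(target, dtype=float)
    return rng.uniform(size=d)   # q ~ U(0, 1)^d
```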
Informed Sampler
- class sbp_env.samplers.informedSampler.InformedSampler(**kwargs)[source]
Bases:
Sampler
The informed sampler is largely similar to
RandomPolicySampler
, except that when an initial solution is found (i.e. when the current maximum cost \(\mathcal{L}_\text{max} < \infty\)), the sampler uses an ellipsoidal heuristic to speed up the convergence of the solution cost. The sampled configuration \(q_\text{new}\) is given by
\[q_\text{new} = \begin{cases} \mathbf{C}\,\mathbf{L}\,q_\text{ball} + q_\text{center} & \text{if } \mathcal{L}_\text{max} < \infty\\ q \sim \mathcal{U}(0,1)^d & \text{otherwise,} \end{cases} \]where \(q_\text{ball} \sim \mathcal{U}(\mathcal{Q}_\text{ball})\) is a uniform sample drawn from a unit \(n\)-ball, \(q_\text{center} = \frac{q_\text{start} + q_\text{target}}{2}\) is the centre location in-between the start and goal configurations, \(\mathbf{C} \in SO(d)\) is the rotation matrix that transforms from the hyperellipsoid frame to the world frame,
\[\mathbf{L} = \operatorname{diag}\left\{ \frac{\mathcal{L}_\text{max}}{2}, \frac{\sqrt{\mathcal{L}^2_\text{max} - \mathcal{L}^2_\text{min}}}{2}, \ldots, \frac{\sqrt{\mathcal{L}^2_\text{max} - \mathcal{L}^2_\text{min}}}{2} \right\} \]is the diagonal transformation matrix that maintains a uniform distribution in the ellipsoidal space, and the minimum cost is given by
\[\mathcal{L}_\text{min} = \lVert q_\text{start} -q_\text{target}\rVert_2 . \]
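The ellipsoidal heuristic above can be sketched as a standalone function; this is an illustrative implementation of the stated formulas (the names `informed_sample` and `sample_unit_ball` are assumptions, not the library's API):

```python
import numpy as np

def sample_unit_ball(d, rng):
    """Uniform sample from the unit d-ball."""
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)
    return rng.uniform() ** (1.0 / d) * x  # radius correction for uniformity

def informed_sample(q_start, q_target, c_max, rng=None):
    """Uniform sample from the hyperellipsoid with foci q_start, q_target
    and transverse diameter c_max (i.e. L_max)."""
    rng = rng or np.random.default_rng()
    q_start = np.asarray(q_start, dtype=float)
    q_target = np.asarray(q_target, dtype=float)
    d = q_start.size
    c_min = np.linalg.norm(q_target - q_start)   # L_min
    centre = (q_start + q_target) / 2.0          # q_center
    # rotation C in SO(d): align the first axis with the start-goal direction
    a1 = (q_target - q_start) / c_min
    U, _, Vt = np.linalg.svd(np.outer(a1, np.eye(d)[0]))
    corr = np.ones(d)
    corr[-1] = np.linalg.det(U) * np.linalg.det(Vt)  # keep det(C) = +1
    C = U @ np.diag(corr) @ Vt
    # diagonal scaling L: semi-axes of the hyperellipsoid
    radii = np.full(d, np.sqrt(c_max**2 - c_min**2) / 2.0)
    radii[0] = c_max / 2.0
    return C @ np.diag(radii) @ sample_unit_ball(d, rng) + centre
```

Every returned sample satisfies the defining ellipse property \(\lVert q - q_\text{start}\rVert_2 + \lVert q - q_\text{target}\rVert_2 \le \mathcal{L}_\text{max}\).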
RRdT Particle Sampler
- class sbp_env.planners.rrdtPlanner.RRdTSampler(restart_when_merge=True, num_dtrees=4, **kwargs)[source]
Bases:
Sampler
Represents RRdT’s sampler
- get_next_pos()[source]
Retrieve next sampled position
- Returns
a sampled position, a callable to report success, and a callable to report failure
- get_random_choice()[source]
Get a random particle (disjointed tree) from the currently managed particles
- Returns
Node from p_manager
- init(**kwargs)[source]
The delayed initialisation method
- Parameters
use_radian – whether this sampler should return values in radians (as opposed to Euclidean coordinates)
start_pt (Node) – the starting configuration for the planning problem
goal_pt (Node) – the goal configuration for the planning problem
- random_walk(idx)[source]
Performs a random walk for the particle at the given index
- Parameters
idx (int) – the index of the particle
- random_walk_by_mouse()[source]
Random walk by mouse
Warning
For testing purposes. Mimics a random walk, but does so via mouse clicks.
- report_fail(idx, **kwargs)[source]
Reports that the sampled position from the particle has failed
- Parameters
idx (int) – the index of the particle
PRM Sampler
- class sbp_env.samplers.prmSampler.PRMSampler(**kwargs)[source]
Bases:
RandomPolicySampler
This sampler is basically the same as
samplers.randomPolicySampler.RandomPolicySampler
, except that we always override the goal bias with \(\epsilon=0\), as PRM cannot utilise the benefit of goal biasing.
- Parameters
random_method – the kind of random method to use. Must be a choice from randomness.SUPPORTED_RANDOM_METHODS.
kwargs – pass through to super class
Likelihood Sampler
- class sbp_env.samplers.likelihoodPolicySampler.LikelihoodPolicySampler(prob_block_size, suppress_visited_area=True, **kwargs)[source]
Bases:
Sampler
This sampler discretises the C-Space into equal-sized cells; each cell then stores a probability value \(p\) that represents a likelihood.
The probability \(p\) of a cell is increased if a \(q \in C_\text{free}\) is reported but the tree extension fails the visibility test (due to intermediate obstacles). The probability \(p\) is decreased depending on its proximity to an existing tree node.
Note
This sampler currently only works with 2D Image Space as it’s hard-coded to sub-divide the space into grid cells. However, the idea is generic and should be applicable for \(d > 2\).
Note
This sampler is experimental for research purpose, and is currently incomplete.
- Parameters
prob_block_size (int) – the unit size for one probability block
suppress_visited_area (bool, optional) – reduce the probability \(p\) if it is close to an existing tree node. Default: True
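The cell-based likelihood mechanism can be sketched as follows; `ProbabilityGrid` and the multiplicative update factors are illustrative assumptions, not the library's actual implementation:

```python
import numpy as np

class ProbabilityGrid:
    """Hypothetical likelihood grid over a 2D image space."""

    def __init__(self, width, height, block_size):
        self.block = block_size
        self.p = np.ones((height // block_size, width // block_size))

    def _cell(self, x, y):
        return int(y // self.block), int(x // self.block)

    def report_fail(self, x, y, factor=1.5):
        # a free sample that failed the visibility test raises the cell's likelihood
        self.p[self._cell(x, y)] *= factor

    def suppress_near_node(self, x, y, factor=0.5):
        # proximity to an existing tree node lowers the likelihood
        self.p[self._cell(x, y)] *= factor

    def sample(self, rng=None):
        rng = rng or np.random.default_rng()
        flat = self.p.ravel() / self.p.sum()
        idx = rng.choice(flat.size, p=flat)      # pick a cell by likelihood
        r, c = divmod(idx, self.p.shape[1])
        # uniform position within the chosen cell
        return (c + rng.uniform()) * self.block, (r + rng.uniform()) * self.block
```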
- _report_fail_impl(x, y, **kwargs)[source]
The internal implementation of
report_fail()
.
- add_sample_line(x1, y1, x2, y2)[source]
Add a 2D sample line denoting a visibility line that lies entirely in free space
- add_tree_node(pos, **kwargs)[source]
Report that a new tree node has been created
- Parameters
pos (ndarray) – the configuration of the new tree node
- report_fail(**kwargs)[source]
Report a failed sample
- Parameters
pos (numpy.ndarray) – the position of the random sample
- report_success(**kwargs)[source]
Report a successful sample
- Parameters
pos (numpy.ndarray) – the position of a newly created node
nn (Node) – the nearest existing node
rand_pos (numpy.ndarray) – the position of a successful random sample
Nearby Sampler
- class sbp_env.samplers.nearbyPolicySampler.NearbyPolicySampler(**kwargs)[source]
Bases:
LikelihoodPolicySampler
This sampler uses the same mechanism as
samplers.likelihoodPolicySampler.LikelihoodPolicySampler
to update the probability \(p\). However, this sampler prioritises configurations that are closer to existing tree nodes (i.e. the tree structure).
Note
This sampler currently only works with 2D Image Space as it’s hard-coded to sub-divide the space into grid cells. However, the idea is generic and should be applicable for \(d > 2\).
Note
This sampler is experimental for research purpose, and is currently incomplete.
- Parameters
prob_block_size – the unit size for one probability block
suppress_visited_area – reduce the probability \(p\) if it is close to an existing tree node
Mouse Sampler
- class sbp_env.samplers.mouseSampler.MouseSampler(**kwargs)[source]
Bases:
Sampler
A sampler that hooks into PyGame’s functionality to create sampled configurations with a mouse/touchpad, for testing whether the motion planner is functioning correctly in creating tree edges, or whether certain bottlenecks within a map are accessible.
Warning
This sampler is for testing purposes only, and hence will not be extended for \(d > 2\) or for collision checkers other than PyGame.
Abstract Base Sampler
There is also a special base Sampler from which all samplers should be derived.
- class sbp_env.samplers.baseSampler.Sampler(**kwargs)[source]
Abstract base sampler that defines the methods that some samplers, but not all, use.
- GetNextPosReturnType
The return type of get_next_pos(). Alias of Tuple[ndarray, Callable, Callable]
- add_sample_line(**kwargs)[source]
Report to the sampler about the entire line that was sampled last time
- Parameters
kwargs – pass through to derived class
- add_tree_node(**kwargs)[source]
Report to the sampler about the last node that was added to the tree
- Parameters
kwargs – pass through to derived class
- get_next_pos(**kwargs)[source]
Retrieve next sampled position
- Returns
a sampled position, a callable to report success, and a callable to report failure
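The (position, report_success, report_fail) contract can be illustrated with a minimal stand-alone sampler (`CountingSampler` is a hypothetical example, not part of sbp_env):

```python
import numpy as np

class CountingSampler:
    """Toy sampler following the (position, on_success, on_fail) protocol."""

    def __init__(self, d=2, seed=0):
        self.rng = np.random.default_rng(seed)
        self.d = d
        self.successes = 0
        self.failures = 0

    def get_next_pos(self):
        # return the sample plus two callbacks for the planner to report back with
        pos = self.rng.uniform(size=self.d)
        return pos, self._on_success, self._on_fail

    def _on_success(self, **kwargs):
        self.successes += 1

    def _on_fail(self, **kwargs):
        self.failures += 1
```

A planner would unpack the tuple, attempt the extension, then invoke exactly one of the two callbacks so the sampler can adapt its strategy.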
- init(use_radian=False, **kwargs)[source]
The delayed initialisation method
- Parameters
use_radian (bool, optional) – whether this sampler should return values in radians (as opposed to Euclidean coordinates). Default: False
start_pt (Node) – the starting configuration for the planning problem
goal_pt (Node) – the goal configuration for the planning problem
- report_fail(**kwargs)[source]
Report to the sampler that the last sample was unsuccessful. This function is sampler dependent.
- Parameters
kwargs – pass through to derived class