API Reference
This page contains the complete API reference for HyperdimensionalComputing.jl.
Index
- HyperdimensionalComputing.BinaryHV
- HyperdimensionalComputing.TernaryHV
- Base.isapprox
- Base.isapprox
- HyperdimensionalComputing.bindsequence
- HyperdimensionalComputing.bipol2grad
- HyperdimensionalComputing.bundlesequence
- HyperdimensionalComputing.convertlevel
- HyperdimensionalComputing.crossproduct
- HyperdimensionalComputing.decodelevel
- HyperdimensionalComputing.encodelevel
- HyperdimensionalComputing.encodelevel
- HyperdimensionalComputing.grad2bipol
- HyperdimensionalComputing.graph
- HyperdimensionalComputing.hashtable
- HyperdimensionalComputing.level
- HyperdimensionalComputing.multibind
- HyperdimensionalComputing.multiset
- HyperdimensionalComputing.nearest_neighbor
- HyperdimensionalComputing.ngrams
- HyperdimensionalComputing.similarity
- HyperdimensionalComputing.similarity
- HyperdimensionalComputing.similarity
- HyperdimensionalComputing.δ
Functions
Base.isapprox — Method
Base.isapprox(u::AbstractHV, v::AbstractHV, atol=length(u)/100, ptol=0.01)

Measures whether two hypervectors are similar (have more elements in common than expected by chance) using the Hamming distance. Uses a bootstrap to construct a null distribution.
One can specify either:
- ptol=1e-10: threshold for seeing that many matches due to chance
- N_bootstrap=200: number of samples for bootstrapping
Base.isapprox — Method
Base.isapprox(u::AbstractHV, v::AbstractHV, atol=length(u)/100, ptol=0.01)

Measures whether two hypervectors are similar (have more elements in common than expected by chance).
One can specify either:
- atol=N/100: number of matches above chance level needed to be considered similar
- ptol=0.01: threshold for seeing that many matches due to chance
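Example

A minimal usage sketch (in Julia, the ≈ operator dispatches to isapprox); BinaryHV(10_000) is assumed to draw a random 10_000-dimensional hypervector, as in the examples further down this page.

using HyperdimensionalComputing
u = BinaryHV(10_000)
v = BinaryHV(10_000)
u ≈ v  # false with overwhelming probability: independent random hypervectors
u ≈ u  # true: identical hypervectors share all elements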
HyperdimensionalComputing.bindsequence — Method
bindsequence(vs::AbstractVector{<:AbstractHV})

Binding-based sequence encoding. The first value is not permuted, the last value is permuted n-1 times.
Arguments
vs::AbstractVector{<:AbstractHV}: Hypervector sequence
Examples
julia> vs = [BinaryHV(10) for _ in 1:10]
10-element Vector{BinaryHV}:
[0, 1, 0, 0, 1, 1, 0, 1, 1, 1]
[1, 0, 1, 0, 0, 0, 1, 1, 1, 0]
[0, 1, 0, 1, 0, 0, 1, 0, 0, 0]
[0, 1, 0, 1, 1, 0, 1, 0, 1, 0]
[1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
[0, 0, 1, 1, 1, 1, 0, 0, 1, 0]
[0, 0, 0, 0, 1, 1, 0, 1, 1, 0]
[1, 1, 1, 0, 1, 1, 0, 0, 1, 1]
[1, 1, 0, 1, 1, 0, 0, 0, 0, 0]
[1, 1, 0, 0, 0, 1, 1, 0, 0, 0]
julia> bindsequence(vs)
10-element BinaryHV:
0
1
1
0
1
1
0
1
1
1

Extended help
This encoding is based on the following mathematical notation:
\[\otimes_{i=1}^{m} \Pi(V_i, i-1)\]
where V is the hypervector collection, m is the size of the hypervector collection, i is the position of the entry in the collection, and \otimes and \Pi are the binding and shift operations.
See also
bundlesequence: Bundle-sequence encoding, bundling-variant of this encoder
HyperdimensionalComputing.bipol2grad — Method
bipol2grad(x::Number)
Maps a bipolar number in [-1, 1] to the [0, 1] interval.
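Example

A small behavioural sketch, assuming the map is the usual order-preserving rescaling of [-1, 1] onto [0, 1] (the exact formula is not shown on this page):

using HyperdimensionalComputing
bipol2grad(-1.0)  # expected to give 0.0
bipol2grad(1.0)   # expected to give 1.0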
HyperdimensionalComputing.bundlesequence — Method
bundlesequence(vs::AbstractVector{<:AbstractHV})

Bundling-based sequence encoding. The first value is not permuted, the last value is permuted n-1 times.
Arguments
vs::AbstractVector{<:AbstractHV}: Hypervector sequence
Examples
julia> vs = [BinaryHV(10) for _ in 1:10]
10-element Vector{BinaryHV}:
[0, 1, 0, 0, 1, 1, 0, 1, 1, 1]
[1, 0, 1, 0, 0, 0, 1, 1, 1, 0]
[0, 1, 0, 1, 0, 0, 1, 0, 0, 0]
[0, 1, 0, 1, 1, 0, 1, 0, 1, 0]
[1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
[0, 0, 1, 1, 1, 1, 0, 0, 1, 0]
[0, 0, 0, 0, 1, 1, 0, 1, 1, 0]
[1, 1, 1, 0, 1, 1, 0, 0, 1, 1]
[1, 1, 0, 1, 1, 0, 0, 0, 0, 0]
[1, 1, 0, 0, 0, 1, 1, 0, 0, 0]
julia> bundlesequence(vs)
10-element BinaryHV:
0
1
0
0
0
0
0
0
1
1

Extended help
This encoding is based on the following mathematical notation:
\[\oplus_{i=1}^{m} \Pi(V_i, i-1)\]
where V is the hypervector collection, m is the size of the hypervector collection, i is the position of the entry in the collection, and \oplus and \Pi are the bundling and shift operations.
See also
bindsequence: Binding-sequence encoding, binding-variant of this encoder
HyperdimensionalComputing.convertlevel — Method
convertlevel(hvlevels, numvals..., kwargs...)

Creates the encoder and decoder for a level encoding in one step. See encodelevel and decodelevel for their respective documentation.
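Example

A hedged usage sketch, assuming convertlevel returns the encoder and the decoder as a pair (see encodelevel and decodelevel below for the individual functions):

using HyperdimensionalComputing
numvalues = range(0, 2pi, 100)
hvlevels = level(BipolarHV(), 100)
encoder, decoder = convertlevel(hvlevels, numvalues)
decoder(encoder(pi/3))  # should recover a value close to pi/3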
HyperdimensionalComputing.crossproduct — Method
crossproduct(U::T, V::T) where {T <: AbstractVector{<:AbstractHV}}

Cross product between two sets of hypervectors.
Arguments
- U::AbstractVector{<:AbstractHV}: Hypervectors
- V::AbstractVector{<:AbstractHV}: Hypervectors
Examples
julia> us = [BinaryHV(10) for _ in 1:5]
5-element Vector{BinaryHV}:
[1, 1, 1, 1, 1, 0, 0, 1, 0, 0]
[0, 1, 0, 0, 1, 1, 1, 0, 1, 0]
[0, 1, 1, 1, 1, 0, 0, 1, 1, 1]
[0, 1, 1, 0, 1, 0, 1, 1, 1, 0]
[1, 0, 0, 1, 0, 0, 1, 1, 1, 1]
julia> vs = [BinaryHV(10) for _ in 1:5]
5-element Vector{BinaryHV}:
[0, 1, 1, 1, 1, 1, 0, 1, 0, 0]
[0, 0, 1, 0, 0, 1, 1, 0, 0, 1]
[0, 0, 0, 0, 1, 0, 0, 1, 0, 1]
[1, 0, 1, 1, 0, 1, 1, 1, 1, 1]
[1, 0, 1, 0, 0, 1, 0, 1, 0, 1]
julia> crossproduct(us, vs)
10-element BinaryHV:
0
0
1
0
1
0
0
1
0
1

Extended help
This encoding strategy first creates a multiset from both input hypervector sets, which are then bound together to generate all cross products, i.e.
U₁ × V₁ + U₁ × V₂ + ... + U₁ × Vₘ + ... + Uₙ × Vₘ
This encoding is based on the following formula:
\[(\oplus_{i=1}^{m} U_i) \otimes (\oplus_{i=1}^{n} V_i)\]
where U and V are collections of hypervectors, m and n are the sizes of the U and V collections, i is the position in the hypervector collection, and \oplus and \otimes are the bundling and binding operations.
HyperdimensionalComputing.decodelevel — Method
decodelevel(hvlevels::AbstractVector{<:AbstractHV}, numvalues)

Generate a decoding function based on a level encoding, for decoding numerical values. It returns a function that gives the numerical value for a given hypervector, based on similarity matching.
Arguments
- hvlevels::AbstractVector{<:AbstractHV}: vector of hypervectors representing the level encoding
- numvalues: the range or vector with the corresponding numerical values
Example
numvalues = range(0, 2pi, 100)
hvlevels = level(BipolarHV(), 100)
decoder = decodelevel(hvlevels, numvalues)
decoder(hvlevels[17]) # value that closely matches the corresponding HV

HyperdimensionalComputing.encodelevel — Method
encodelevel(hvlevels::AbstractVector{<:AbstractHV}, numvalues; testbound=false)

Generate an encoding function based on a level encoding, for encoding numerical values. It returns a function that gives the corresponding hypervector for a given numerical input.
Arguments
- hvlevels::AbstractVector{<:AbstractHV}: vector of hypervectors representing the level encoding
- numvalues: the range or vector with the corresponding numerical values
- [testbound=false]: optional keyword argument to check whether the provided value is in bounds
Example
numvalues = range(0, 2pi, 100)
hvlevels = level(BipolarHV(), 100)
encoder = encodelevel(hvlevels, numvalues)
encoder(pi/3) # hypervector that best represents this numerical value

HyperdimensionalComputing.encodelevel — Method
encodelevel(hvlevels::AbstractVector{<:AbstractHV}, a::Number, b::Number; testbound=false)

Same as encodelevel above, but with the lower (a) and upper (b) limits of the interval to be encoded provided explicitly.
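Example

A short sketch of the interval form; hvlevels is constructed as in the example above.

using HyperdimensionalComputing
hvlevels = level(BipolarHV(), 100)
encoder = encodelevel(hvlevels, 0, 2pi)
encoder(pi/3) # hypervector representing this value on the [0, 2pi] interval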
HyperdimensionalComputing.grad2bipol — Method
grad2bipol(x::Number)

Maps a graded number in [0, 1] to the [-1, 1] interval.
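Example

A small behavioural sketch, assuming grad2bipol and bipol2grad are mutual inverses (order-preserving rescaling between the two intervals):

using HyperdimensionalComputing
grad2bipol(0.0)              # expected to give -1.0
grad2bipol(1.0)              # expected to give 1.0
grad2bipol(bipol2grad(0.3))  # expected to round-trip back to 0.3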
HyperdimensionalComputing.graph — Method
graph(source::T, target::T, directed::Bool = false)

Graph encoding for source-target pairs. Can be directed or undirected.
Arguments
- source::T: Source node hypervectors
- target::T: Target node hypervectors
- directed::Bool = false: Whether the graph is directed or not
Example
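The docstring provides no example; the sketch below assumes graph accepts vectors of source and target hypervectors, analogous to hashtable.

using HyperdimensionalComputing
srcs = [BinaryHV(10) for _ in 1:5]
tgts = [BinaryHV(10) for _ in 1:5]
graph(srcs, tgts)       # undirected graph hypervector
graph(srcs, tgts, true) # directed graph hypervector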
Extended help
This encoding is based on the following mathematical notation:
Undirected graphs
\[\oplus_{i=1}^{m} S_i \otimes T_i\]
Directed graphs
\[\oplus_{i=1}^{m} S_i \otimes \Pi(T_i)\]
where S and T are the source and target hypervector collections, m is the size of the hypervector collection, i is the position of the entry in the collection, and \otimes, \oplus and \Pi are the binding, bundling and shift operations.
See also
hashtable: Hash table encoding, underlying encoding strategy of this encoder.
HyperdimensionalComputing.hashtable — Method
hashtable(keys::T, values::T) where {T <: AbstractVector{<:AbstractHV}}

Hash table from key-value hypervector pairs. Keys and values must have the same length in order to be encoded as a hypervector.
Arguments
- keys::AbstractVector{<:AbstractHV}: Keys hypervectors
- values::AbstractVector{<:AbstractHV}: Values hypervectors
Example
julia> ks = [BinaryHV(10) for _ in 1:5]
5-element Vector{BinaryHV}:
[0, 0, 0, 1, 0, 1, 1, 0, 0, 0]
[1, 0, 1, 0, 1, 0, 1, 0, 1, 1]
[0, 0, 0, 0, 1, 1, 1, 0, 1, 1]
[1, 0, 0, 0, 0, 1, 1, 0, 1, 0]
[0, 1, 1, 1, 1, 0, 0, 1, 1, 1]
julia> vs = [BinaryHV(10) for _ in 1:5]
5-element Vector{BinaryHV}:
[0, 1, 0, 0, 1, 1, 0, 1, 0, 0]
[1, 0, 1, 1, 1, 0, 1, 0, 0, 0]
[0, 0, 0, 0, 0, 1, 1, 0, 1, 0]
[0, 1, 1, 0, 0, 0, 0, 1, 0, 1]
[0, 1, 1, 0, 0, 0, 1, 1, 0, 1]
julia> hashtable(ks, vs)
10-element BinaryHV:
0
0
0
1
1
0
1
0
1
1

Extended help
This encoding is based on the following mathematical notation:
\[\oplus_{i=1}^{m} K_i \otimes V_i\]
where K and V are the key and value hypervector collections, m is the size of the hypervector collection, i is the position of the entry in the collection, and \otimes and \oplus are the binding and bundling operations.
HyperdimensionalComputing.level — Method
level(v::HV, n::Int) where {HV <: AbstractHV}
level(HV::Type{<:AbstractHV}, n::Int; D::Int = 10_000)

Creates a set of level-correlated hypervectors, where the first and last hypervectors are quasi-orthogonal.
Arguments
- v::HV: Base hypervector
- n::Int: Number of levels (alternatively, provide a vector to be encoded)
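Example

A short usage sketch; the comments restate the properties described above.

using HyperdimensionalComputing
hvlevels = level(BipolarHV(), 100)      # 100 level-correlated hypervectors
similarity(hvlevels[1], hvlevels[2])    # neighboring levels are highly similar
similarity(hvlevels[1], hvlevels[end])  # first and last levels are quasi-orthogonal (low similarity)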
HyperdimensionalComputing.multibind — Method
multibind(vs::AbstractVector{<:AbstractHV})

Binding of multiple hypervectors; binds all the input hypervectors together.
Arguments
vs::AbstractVector{<:AbstractHV}: Hypervectors
Examples
julia> vs = [BinaryHV(10) for _ in 1:10]
10-element Vector{BinaryHV}:
[0, 1, 0, 0, 1, 1, 0, 1, 1, 1]
[1, 0, 1, 0, 0, 0, 1, 1, 1, 0]
[0, 1, 0, 1, 0, 0, 1, 0, 0, 0]
[0, 1, 0, 1, 1, 0, 1, 0, 1, 0]
[1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
[0, 0, 1, 1, 1, 1, 0, 0, 1, 0]
[0, 0, 0, 0, 1, 1, 0, 1, 1, 0]
[1, 1, 1, 0, 1, 1, 0, 0, 1, 1]
[1, 1, 0, 1, 1, 0, 0, 0, 0, 0]
[1, 1, 0, 0, 0, 1, 1, 0, 0, 0]
julia> multibind(vs)
10-element BinaryHV:
1
1
1
0
1
1
1
0
1
0

Extended help
This encoding is based on the following mathematical notation:
\[\otimes_{i=1}^{m} V_i\]
where V is the hypervector collection, m is the size of the hypervector collection, i is the position of the entry in the collection, and \otimes is the binding operation.
See also
multiset: Multiset encoding, bundling-variant of this encoder
HyperdimensionalComputing.multiset — Method
multiset(vs::AbstractVector{<:T})::T where {T <: AbstractHV}

Multiset of the input hypervectors; bundles all the input hypervectors together.
Arguments
vs::AbstractVector{<:AbstractHV}: Hypervectors
Example
julia> vs = [BinaryHV(10) for _ in 1:10]
10-element Vector{BinaryHV}:
[0, 1, 0, 0, 1, 1, 0, 1, 1, 1]
[1, 0, 1, 0, 0, 0, 1, 1, 1, 0]
[0, 1, 0, 1, 0, 0, 1, 0, 0, 0]
[0, 1, 0, 1, 1, 0, 1, 0, 1, 0]
[1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
[0, 0, 1, 1, 1, 1, 0, 0, 1, 0]
[0, 0, 0, 0, 1, 1, 0, 1, 1, 0]
[1, 1, 1, 0, 1, 1, 0, 0, 1, 1]
[1, 1, 0, 1, 1, 0, 0, 0, 0, 0]
[1, 1, 0, 0, 0, 1, 1, 0, 0, 0]
julia> multiset(vs)
10-element BinaryHV:
0
1
0
0
1
0
0
0
1
0

Extended help
This encoding is based on the following mathematical notation:
\[\oplus_{i=1}^{m} V_i\]
where V is the hypervector collection, m is the size of the hypervector collection, i is the position of the entry in the collection, and \oplus is the bundling operation.
See also
multibind: Multibind encoding, binding-variant of this encoder
HyperdimensionalComputing.nearest_neighbor — Method
nearest_neighbor(u::AbstractHV, collection[, k::Int]; kwargs...)

Returns the element of collection that is most similar to u.
The function returns (τ, i, xi), with τ the highest similarity value, i the index (or the key, if collection is a dictionary) of the closest neighbor, and xi the closest vector itself. kwargs are optional arguments passed to the similarity search.
If a number k is given, the k closest neighbors are returned as a sorted list of (τ, i) pairs.
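Example

A minimal usage sketch based on the return convention described above:

using HyperdimensionalComputing
vs = [BinaryHV(10_000) for _ in 1:5]
τ, i, xi = nearest_neighbor(vs[3], vs)  # i should be 3: a vector is its own nearest neighbor
nearest_neighbor(vs[3], vs, 2)          # the two closest neighbors as (τ, i) pairs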
HyperdimensionalComputing.ngrams — Function
ngrams(vs::AbstractVector{<:AbstractHV}, n::Int = 3)

Creates a hypervector with the n-gram statistics of the input.
Arguments
- vs::AbstractVector{<:AbstractHV}: Hypervector collection
- n::Int = 3: n-gram size
Examples
julia> vs = [BinaryHV(10) for _ in 1:10]
10-element Vector{BinaryHV}:
[0, 1, 0, 0, 1, 0, 1, 0, 0, 1]
[0, 0, 1, 1, 1, 0, 0, 1, 1, 1]
[0, 0, 1, 0, 0, 1, 1, 1, 0, 0]
[1, 0, 0, 0, 1, 0, 0, 1, 1, 1]
[0, 1, 0, 0, 1, 0, 1, 1, 0, 0]
[1, 1, 1, 0, 1, 0, 0, 1, 0, 1]
[1, 0, 0, 0, 1, 0, 0, 1, 1, 0]
[0, 1, 0, 1, 0, 0, 0, 1, 1, 0]
[0, 0, 0, 1, 1, 1, 1, 0, 0, 1]
[1, 1, 0, 0, 0, 1, 1, 1, 0, 1]
julia> ngrams(vs)
10-element BinaryHV:
1
1
1
1
1
1
0
1
0
1

Extended help
This encoding is defined by the following mathematical notation:
\[\oplus_{i=1}^{m-n}\otimes_{j=1}^{n-1}\Pi^{n-j-1}(V_{i+j})\]
where V is the collection of hypervectors, m is the number of hypervectors in the collection V, n is the window size, i is the position in the sequence, j is the position in the n-gram, and \oplus, \otimes and \Pi are the bundling, binding and shift operations.
See also
- multiset: Multiset encoding, equivalent to ngrams(vs, 1)
- bindsequence: Binding-sequence encoding, equivalent to ngrams(vs, length(vs))
HyperdimensionalComputing.similarity — Method
similarity(u::AbstractHV; [method])

Create a function that computes the similarity between its argument and u using similarity, i.e. a function equivalent to v -> similarity(u, v).
HyperdimensionalComputing.similarity — Method
similarity(u::AbstractVector, v::AbstractVector; method::Symbol)

Computes similarity between two (hyper)vectors using a method ∈ [:cosine, :jaccard, :hamming]. When no method is given, a default is used (cosine for vectors that can have negative elements and Jaccard for those that only have positive elements).
HyperdimensionalComputing.similarity — Method
similarity(hvs::AbstractVector{<:AbstractHV}; [method])

Computes the similarity matrix for a vector of hypervectors, using the similarity metric defined by the pairwise version of similarity.
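Example

A combined usage sketch of the three forms above:

using HyperdimensionalComputing
u, v = BinaryHV(10_000), BinaryHV(10_000)
similarity(u, v)                   # default method chosen from the element type
similarity(u, v; method=:hamming)  # explicit method
sim_u = similarity(u)              # curried form: sim_u(v) == similarity(u, v)
similarity([u, v])                 # 2×2 similarity matrix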
HyperdimensionalComputing.δ — Function
δ(u::AbstractHV, v::AbstractHV; [method])
δ(u::AbstractHV; [method])
δ(hvs::AbstractVector{<:AbstractHV}; [method])

Alias for similarity. See similarity for the main documentation.
Types
HyperdimensionalComputing.BinaryHV — Type
BinaryHV

A binary hypervector type based on the Binary Spatter Code (BSC) vector symbolic architecture (Kanerva, 1994; Kanerva, 1995; Kanerva, 1996; Kanerva, 1997).
Represents a hypervector with boolean elements, i.e. (false, true).
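Example

Constructing a random binary hypervector, as done throughout the examples on this page (here with dimension 10_000):

using HyperdimensionalComputing
v = BinaryHV(10_000)  # random 10_000-dimensional binary hypervector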
Extended help
References
- Kanerva, P. (1994). The Spatter Code for Encoding Concepts at Many Levels. In International Conference on Artificial Neural Networks (ICANN), pages 226–229.
- Kanerva, P. (1995). A Family of Binary Spatter Codes. In International Conference on Artificial Neural Networks (ICANN), pages 517–522.
- Kanerva, P. (1996). Binary Spatter-Coding of Ordered K-tuples. In International Conference on Artificial Neural Networks (ICANN), volume 1112 of Lecture Notes in Computer Science, pages 869–873.
- Kanerva, P. (1997). Fully Distributed Representation. In Real World Computing Symposium (RWC), pages 358–365.
HyperdimensionalComputing.TernaryHV — Type
TernaryHV

A ternary hypervector type based on the Multiply-Add-Permute (MAP) vector symbolic architecture (Gayler, 1998).
Represents a hypervector with elements in (-1, 0, 1).
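Example

Presumably constructed analogously to BinaryHV (an assumption; the constructor is not documented on this page):

using HyperdimensionalComputing
v = TernaryHV(10_000)  # assumed: a random 10_000-dimensional ternary hypervector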
Extended help
References
- Gayler, R. W. (1998). Multiplicative Binding, Representation Operators & Analogy. In Advances in Analogy Research: Integration of Theory and Data from the Cognitive, Computational, and Neural Sciences, pages 1–4.