fab.learner package

Subpackages

Submodules

fab.learner.calc_fic module

fab.learner.calc_fic.calc_fic_comps_penalty(comps, vposterior_prob, fisher_coeffs=1.0)

Calculates a FIC penalty term of component complexities.

Parameters:
comps : list[FABComponent]

List of component objects. The parameter dimensionality of each component is referred to.

vposterior_prob : np.ndarray, size = (num_samples, num_comps)

Variational posterior matrix.

fisher_coeffs : float or np.ndarray, size = (num_samples, num_comps) or (num_samples, num_targets, num_comps)

If 1.0 [default], the plain number of expected samples is used. To calculate a scaled FIC value, a scalar or a coefficient matrix applied to vposterior_prob must be given.

Returns:
fic : float

The calculated FIC complexity of all components.
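As a shape reference, the penalty with the default fisher_coeffs can be sketched in plain NumPy. The per-component parameter dimensionalities (`dims`) are hypothetical stand-ins for the FABComponent objects:

```python
import numpy as np

# variational posterior, shape (num_samples, num_comps)
vposterior_prob = np.array([[0.7, 0.3],
                            [0.4, 0.6],
                            [0.9, 0.1]])
# hypothetical parameter dimensionality D_j of each component
dims = np.array([3, 5])

# expected number of samples per component (fisher_coeffs = 1.0)
num_expect = vposterior_prob.sum(axis=0)
# FIC complexity penalty: -(D_j / 2) * log(N_j), summed over components
fic_penalty = float(np.sum(-0.5 * dims * np.log(num_expect)))
```

With a fisher_coeffs matrix, the column sums of `fisher_coeffs * vposterior_prob` would replace `num_expect` in this sketch.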

fab.learner.calc_fic.calc_fic_hme_lvprior_penalty(lvprior, vposterior_on_gates, fisher_coeffs=1.0)

Calculates a FIC penalty term of lvprior complexities.

Parameters:
lvprior : HMEBinaryTreeLVPrior

Latent variable prior object. The tree structure of lvprior and the parameter dimensionality of each gate are referred to.

vposterior_on_gates : np.ndarray, size = (num_samples, num_gates)

Cumulated variational posterior matrix for all gates.

fisher_coeffs : float or np.ndarray, size = (num_samples, num_gates)

If 1.0 [default], the plain number of expected samples is used. To calculate a scaled FIC value, a scalar or a coefficient matrix applied to vposterior_on_gates must be given.

Returns:
fic : float

The calculated FIC complexity of all gates.

fab.learner.calc_fic.calc_fic_loglikelihood_and_entropy(vposterior_prob, loglikelihood_compwise)

Calculates FIC terms of log-likelihood and entropy.

Parameters:
vposterior_prob : np.ndarray, size = (num_samples, num_comps)

Variational posterior matrix.

loglikelihood_compwise : np.ndarray, size = (num_samples, num_comps)

Component-wise log-likelihood values for each sample.

Returns:
fic : float

The calculated FIC value of the log-likelihood and entropy terms.
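A minimal NumPy sketch of the two terms, assuming the standard FAB form (expected log-likelihood under the variational posterior plus the entropy of that posterior):

```python
import numpy as np

vposterior_prob = np.array([[0.7, 0.3],
                            [0.4, 0.6]])
loglikelihood_compwise = np.array([[-1.0, -2.0],
                                   [-1.5, -0.5]])

# expected complete-data log-likelihood under the variational posterior
expected_ll = np.sum(vposterior_prob * loglikelihood_compwise)
# entropy of the variational posterior (0 * log 0 treated as 0)
p = vposterior_prob
entropy = -np.sum(np.where(p > 0, p * np.log(p), 0.0))
fic = float(expected_ll + entropy)
```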

fab.learner.calc_fic.calc_logit_fisher_coeffs(loglikelihood)

Calculates Fisher coefficients for logistic function.

Parameters:
loglikelihood : np.ndarray, size = (num_samples, num_XXX) or (num_samples, num_XXX, num_YYY)

Log-likelihood matrix of the components or the lvprior gates.

Returns:
fisher_coeffs : np.ndarray, size = (num_samples, num_XXX) or (num_samples, num_XXX, num_YYY)

The calculated Fisher coefficients for the logistic function.
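The exact formula is not spelled out here. For a logistic (Bernoulli) model, the per-sample Fisher information is proportional to g(1 - g), where g is the model probability, so one plausible sketch (an assumption, not the library's code) recovers g from the log-likelihood:

```python
import numpy as np

# component-wise log-likelihood, shape (num_samples, num_XXX)
loglikelihood = np.log(np.array([[0.9, 0.2],
                                 [0.5, 0.7]]))

prob = np.exp(loglikelihood)          # back to probabilities g
fisher_coeffs = prob * (1.0 - prob)   # logistic Fisher term g(1 - g)
```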

fab.learner.calc_fic.calc_scaled_num_expect_samples(vposterior_prob, fisher_coeffs)

Calculates the scaled number of expected samples.

Parameters:
vposterior_prob : np.ndarray, size = (num_samples, num_comps)

Variational posterior matrix.

fisher_coeffs : float or np.ndarray, size = (num_samples, num_comps) or (num_samples, num_comps, num_targets)

The specified scale coefficient is applied to the number of expected samples in the L0-regularization term.

Returns:
num_expect_samples : np.ndarray, size = (num_samples, num_comps)

The calculated scaled number of expected samples.
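A sketch of the scaling, assuming the coefficients multiply the variational posterior elementwise; summing the result over the sample axis then gives the per-component scaled count:

```python
import numpy as np

vposterior_prob = np.array([[0.7, 0.3],
                            [0.4, 0.6]])
fisher_coeffs = np.array([[0.2, 0.1],
                          [0.5, 0.25]])

# per-sample scaled contributions, shape (num_samples, num_comps)
scaled = fisher_coeffs * vposterior_prob
# per-component scaled number of expected samples
per_comp = scaled.sum(axis=0)
```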

fab.learner.calc_random_params module

fab.learner.calc_random_params.calc_random_bias(data, bias_min_scale, bias_max_scale)

Generates a random bias value in a range determined by the specified input data and scale coefficients.

Parameters:
data : SupervisedData

Input data; its target data must be a single target.

bias_min_scale, bias_max_scale : float

Scale values for the initialization of the bias. Domain = (-inf, inf).

Returns:
bias : float

Initialized bias value for a component.

fab.learner.calc_random_params.calc_random_variance(data, variance_min_scale, variance_max_scale)

Generates a random variance value in a range determined by the specified input data and scale coefficients.

Parameters:
data : SupervisedData

Input data; its target data must be a single target.

variance_min_scale, variance_max_scale : float

Scale values for the initialization of the variance. Domain = (-inf, inf).

Returns:
variance : float

Initialized variance value for a component.

fab.learner.calc_random_params.calc_random_weights(data, weights_min_scale, weights_max_scale)

Generates random weight values in a range determined by the specified input data and scale coefficients.

Parameters:
data : SupervisedData

Input data; its target data must be a single target.

weights_min_scale, weights_max_scale : float

Scale values for the initialization of the weights. Domain = (-inf, inf).

Returns:
weights : np.array, size = (num_features)

Initialized weight values for a component.

fab.learner.context module

class fab.learner.context.LearningContext(num_comps)

Bases: object

A context class holding state information for the FAB learning process.

Parameters:
num_comps : int

Number of components at the initial state.

Attributes:
fic_history : list[float], size = (num_fab_steps)

History of FIC values.

num_comps_history : list[int], size = (num_fab_steps)

History of the number of remaining (un-shrunk) components.

Methods

append(fic, num_comps)

Appends a FIC value and the number of components to the corresponding history variables.

append(fic, num_comps)

Appends a FIC value and the number of components to the corresponding history variables.

Parameters:
fic : float

FIC value to be appended.

num_comps : int

The number of remaining (un-shrunk) components to be appended.

Returns:
None
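The documented interface is small enough to sketch as a stand-alone class (a re-implementation sketch of the documented behavior, not the library source):

```python
class LearningContext:
    """Minimal sketch: holds FIC and component-count histories per FAB step."""

    def __init__(self, num_comps):
        self._initial_num_comps = num_comps  # components at the initial state
        self.fic_history = []                # history of FIC values
        self.num_comps_history = []          # history of remaining components

    def append(self, fic, num_comps):
        """Record one FAB step in both history lists."""
        self.fic_history.append(fic)
        self.num_comps_history.append(num_comps)


ctx = LearningContext(num_comps=5)
ctx.append(-120.3, 5)
ctx.append(-118.7, 4)   # one component was shrunk this step
```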

fab.learner.data module

class fab.learner.data.SupervisedData(X, Y, feature_ids, mandatory_feature_mask=None, constraint_signs=None)

Bases: object

A data object class for supervised learning.

Parameters:
X : np.ndarray, size = (num_samples, num_features)

Input feature data to be held.

Y : np.array, size = (num_samples) or np.ndarray, size = (num_samples, num_targets)

Input target data to be held.

feature_ids : list[int]

Feature ID number for each feature.

mandatory_feature_mask : None or np.array(bool), size = (num_features)

Mask of the mandatory relevant features for the components. Optional [default: None].

constraint_signs : None or np.array({1, 0, -1}), size = (num_features)

Sign constraints for the weight values: 1 means a positive constraint, -1 a negative constraint, and 0 no constraint. [default: None]

Attributes:
num_features

Returns the number of features.

num_samples

Returns the number of samples.

num_targets

Returns the number of targets.

property num_features

Returns the number of features.

Returns:
num_features : int

Number of features.

property num_samples

Returns the number of samples.

Returns:
num_samples : int

Number of samples.

property num_targets

Returns the number of targets.

Returns:
num_targets : int

Number of targets.

fab.learner.data.make_constraint_signs(feature_ids, positive_ids, negative_ids)

Makes an array indicating the sign constraint.

Parameters:
feature_ids : list[int]

Feature ID number for each feature.

positive_ids : list[int]

List of feature IDs to which a positive constraint on the weight values is applied for all components.

negative_ids : list[int]

List of feature IDs to which a negative constraint on the weight values is applied for all components.

Returns:
constraint_signs : np.array(int), size = (num_features)

Array indicating the sign constraint for each feature: 1 means a positive constraint, -1 a negative constraint, and 0 no constraint.
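A stand-alone sketch of the documented behavior (not the library source); the feature IDs are made up:

```python
import numpy as np

def make_constraint_signs(feature_ids, positive_ids, negative_ids):
    """Sketch: +1 for positively constrained IDs, -1 for negative, 0 otherwise."""
    signs = np.zeros(len(feature_ids), dtype=int)
    for i, fid in enumerate(feature_ids):
        if fid in positive_ids:
            signs[i] = 1
        elif fid in negative_ids:
            signs[i] = -1
    return signs

signs = make_constraint_signs([10, 11, 12, 13], positive_ids=[11], negative_ids=[13])
```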

fab.learner.data.make_mask_for_ids(feature_ids, split_indices)

Makes feature mask from split indices.

Parameters:
feature_ids : list[int]

Feature ID number for each feature.

split_indices : None or list[int]

The feature indices which are set to True in the output mask.

Returns:
feature_mask : np.array(bool), size = (num_features)

Feature mask generated from the specified feature information.
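Assuming split_indices selects the True positions and None selects everything (the None case is a guess from the signature), a stand-alone sketch:

```python
import numpy as np

def make_mask_for_ids(feature_ids, split_indices):
    """Sketch: boolean mask with True at the listed feature indices."""
    if split_indices is None:
        return np.ones(len(feature_ids), dtype=bool)
    mask = np.zeros(len(feature_ids), dtype=bool)
    mask[split_indices] = True
    return mask

mask = make_mask_for_ids([10, 11, 12, 13], [0, 2])
```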

fab.learner.is_stop module

fab.learner.is_stop.is_stop_fab_iteration(context, threshold, is_ratio=False)

Determines the convergence of the FAB iteration.

If the following two conditions are satisfied simultaneously, it is judged to have converged in the current iteration.

  1. No component is shrunk in the current iteration, and

  2. The increase in FIC-value (\(FIC^{(t)} - FIC^{(t-1)}\)) is less than the threshold value.

If is_ratio is True, the convergence is checked not by absolute FIC difference (\(FIC^{(t)} - FIC^{(t-1)}\)) but by FIC increase ratio: (\(\{FIC^{(t)} - FIC^{(t-1)}\} / | FIC^{(t-1)} |\)).

Parameters:
context : LearningContext

Context information to be judged for convergence.

threshold : float

Threshold value used to judge convergence on the FIC history.

is_ratio : boolean, optional [default: False]

If True, the increase ratio between two consecutive FIC values is used instead of their absolute difference.

Returns:
is_stop : boolean

Returns True when the FAB iteration is considered to have converged.
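The two conditions can be sketched as follows. This sketch takes the two history lists directly (as documented for LearningContext) rather than a context object:

```python
def is_stop_fab_iteration(fic_history, num_comps_history, threshold, is_ratio=False):
    """Sketch of the documented convergence test."""
    if len(fic_history) < 2:
        return False
    # condition 1: no component was shrunk in the current iteration
    if num_comps_history[-1] != num_comps_history[-2]:
        return False
    # condition 2: FIC increase (or increase ratio) below the threshold
    increase = fic_history[-1] - fic_history[-2]
    if is_ratio:
        increase /= abs(fic_history[-2])
    return increase < threshold
```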

fab.learner.operate_shrinkage module

fab.learner.operate_shrinkage.determine_shrink_comps(vposterior_prob, threshold, fisher_coeffs=1.0)

Determines which components are to be shrunk.

Parameters:
vposterior_prob : np.ndarray, size = (num_samples, num_comps)

Variational posterior matrix.

threshold : float

Threshold value. If the number of expected samples of a component is less than the threshold, the component is judged to be shrunk.

fisher_coeffs : float or np.ndarray, size = (num_samples, num_comps) or (num_samples, num_comps, num_targets)

If 1.0 [default], the plain number of expected samples is used. To use the scaled number of expected samples, a coefficient matrix applied to vposterior_prob must be given.

Returns:
num_shrink_comps : int

Number of components to be shrunk.

shrink_comp_mask : np.array(bool), size = (num_comps)

Indicates whether each component is judged to be shrunk: if shrink_comp_mask[i] is True, the i-th component is shrinkable.
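With the default fisher_coeffs, the test reduces to comparing per-component expected sample counts against the threshold; a sketch with made-up numbers:

```python
import numpy as np

vposterior_prob = np.array([[0.7, 0.29, 0.01],
                            [0.5, 0.49, 0.01]])
threshold = 0.1

# expected number of samples per component (fisher_coeffs = 1.0)
num_expect = vposterior_prob.sum(axis=0)
shrink_comp_mask = num_expect < threshold
num_shrink_comps = int(shrink_comp_mask.sum())
```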

fab.learner.update_vposterior module

fab.learner.update_vposterior.calc_comp_log_regularizers(comps, vposterior_prob, fisher_coeffs=1.0)

Calculates the logarithm of the regularization term of all components for updating the variational posterior distribution.

\[\frac{-D_{\phi_j} \Gamma_{\phi_{(n, j)}}^{(t-1)}} {2 \ \sum_{n = 1}^{N} q^{(t-1)}(\zeta_{j}^{n}) \Gamma_{\phi_{(n, j)}}^{(t-1)}}\]
Parameters:
comps : list[FABComponent]

List of component objects. The parameter dimensionality of each component is referred to.

vposterior_prob : np.ndarray, size = (num_samples, num_comps)

Variational posterior matrix.

fisher_coeffs : float or np.ndarray, size = (num_samples, num_comps) or (num_samples, num_comps, num_targets)

If 1.0 [default], the plain number of expected samples is used in the regularization term. For a scaled component complexity, a coefficient matrix applied to vposterior_prob must be given.

Returns:
log_regularizers : np.ndarray, size = (num_samples, num_comps)

The complexities of all components. For the i-th component, refer to log_regularizers[:, i].
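The formula above can be evaluated directly in NumPy; the component dimensionalities D_phi_j (`dims`) are hypothetical stand-ins for the FABComponent objects:

```python
import numpy as np

vposterior_prob = np.array([[0.7, 0.3],
                            [0.4, 0.6],
                            [0.9, 0.1]])
dims = np.array([3, 5])                 # hypothetical D_phi_j per component
gamma = np.ones_like(vposterior_prob)   # fisher_coeffs = 1.0

# denominator: 2 * sum_n q(zeta_j^n) * Gamma_{(n, j)}, per component
denom = 2.0 * np.sum(vposterior_prob * gamma, axis=0)
# the formula, evaluated for every (sample, component) pair
log_regularizers = -dims * gamma / denom
```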

fab.learner.update_vposterior.calc_hme_lvprior_log_regularizers(lvprior, vposterior_on_gates, fisher_coeffs=1.0)

Calculates the logarithm of the regularization term of all gates for updating the variational posterior distribution.

\[\sum_{i \in \mathcal{E}_j} \ \frac{-D_{\beta_i} \Gamma_{\beta_{(n, j)}}^{(t-1)}} {2 \ \sum_{n = 1}^{N} \sum_{j \in {g}_i} q^{(t-1)}(\zeta_{j}^{n}) \ \Gamma_{\beta_{(n, j)}}^{(t-1)}}\]
Parameters:
lvprior : HMEBinaryTreeLVPrior

Latent variable prior object. The gate-tree structure of the prior and the parameter dimensionality of each gate are referred to.

vposterior_on_gates : np.ndarray, size = (num_samples, num_gates)

Cumulated variational posterior matrix for all gates.

fisher_coeffs : float or np.ndarray, size = (num_samples, num_gates)

If 1.0 [default], the plain number of expected samples is used in the regularization term. For a scaled gate complexity, a coefficient matrix applied to vposterior_on_gates must be given.

Returns:
log_regularizers : np.ndarray, size = (num_samples, num_gates)

The complexities of all gates. For the i-th gate, refer to log_regularizers[:, i].

Module contents