Title: Learning Discrete Bayesian Network Classifiers from Data
Description: State-of-the-art algorithms for learning discrete Bayesian network classifiers from data, including a number of those described in Bielza & Larranaga (2014) <doi:10.1145/2576868>, with functions for prediction, model evaluation and inspection.
Authors: Mihaljevic Bojan [aut, cre, cph], Bielza Concha [aut], Larranaga Pedro [aut], Wickham Hadley [ctb] (some code extracted from memoise package)
Maintainer: Mihaljevic Bojan <[email protected]>
License: GPL (>= 2)
Version: 0.4.8
Built: 2025-03-10 03:04:22 UTC
Source: https://github.com/bmihaljevic/bnclassify
Compute predictive accuracy.
accuracy(x, y)
x: A vector of predicted labels.
y: A vector of true labels.
data(car)
nb <- bnc('nb', 'class', car, smooth = 1)
p <- predict(nb, car)
accuracy(p, car$class)
Learn an averaged one-dependence estimators (AODE) ensemble. If there is a single predictor, returns a naive Bayes.
aode(class, dataset, features = NULL)
class: A character. Name of the class variable.
dataset: The data frame from which to learn the classifier.
features: A character vector. The names of the features. This argument is ignored if dataset is provided.
A bnc_aode object, or a bnc_dag (if returning a naive Bayes).
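aode has no example of its own in this manual; the following is a minimal usage sketch, fitting the ensemble through bnc as in the other examples here:
data(car)
a <- bnc('aode', 'class', car, smooth = 1)
p <- predict(a, car)
accuracy(p, car$class)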
Convert a bnc_bn to an mlr Learner object.
as_mlr(x, dag, id = "1")
x: A bnc_bn object.
dag: A logical. Whether to learn structure on each training subsample. Parameters are always learned.
id: A character. The id for the mlr Learner.
data(car)
nb <- bnc('nb', 'class', car, smooth = 1)
## Not run: library(mlr)
## Not run: nb_mlr <- as_mlr(nb, dag = FALSE, id = "ode_cl_aic")
## Not run: nb_mlr
A convenience function to learn the structure and parameters in a single
call. Must provide the name of the structure learning algorithm function;
see bnclassify
for the list.
bnc(dag_learner, class, dataset, smooth, dag_args = NULL, awnb_trees = NULL, awnb_bootstrap = NULL, manb_prior = NULL, wanbia = NULL)
dag_learner: A character. Name of the structure learning function.
class: A character. Name of the class variable.
dataset: The data frame from which to learn network structure and parameters.
smooth: A numeric. The smoothing value (alpha) for Bayesian parameter estimation. Nonnegative.
dag_args: A list. Optional additional arguments to dag_learner.
awnb_trees: An integer. The number (M) of bootstrap samples to generate.
awnb_bootstrap: A numeric. The size of the bootstrap subsample, relative to the size of dataset (given in [0,1]).
manb_prior: A numeric. The prior probability for an arc between the class and any feature.
wanbia: A logical. If TRUE, WANBIA feature weighting is performed.
data(car)
nb <- bnc('nb', 'class', car, smooth = 1)
nb_manb <- bnc('nb', 'class', car, smooth = 1, manb_prior = 0.3)
ode_cl_aic <- bnc('tan_cl', 'class', car, smooth = 1, dag_args = list(score = 'aic'))
A Bayesian network classifier with structure and parameters. Returned by the lp and bnc functions. You can use it to classify data (with predict), estimate its predictive accuracy (with cv), plot its structure (with plot), print a summary to the console (with print), inspect it with the functions documented in inspect_bnc_bn and inspect_bnc_dag, and convert it to mlr, grain, and graph objects (see as_mlr and grain_and_graph).
data(car)
tan <- bnc('tan_cl', 'class', car, smooth = 1)
tan
p <- predict(tan, car)
head(p)
## Not run: plot(tan)
nparams(tan)
A Bayesian network classifier structure, returned by functions such as nb and tan_cl. You can plot its structure (with plot), print a summary to the console (with print), inspect it with the functions documented in inspect_bnc_dag, and convert it to a graph object with grain_and_graph.
data(car)
nb <- tan_cl('class', car)
nb
## Not run: plot(nb)
narcs(nb)
State-of-the-art algorithms for learning discrete Bayesian network classifiers from data, with functions for prediction, model evaluation and inspection. To learn more about the package, start with the vignettes: browseVignettes(package = "bnclassify"). The following is a list of available functionalities:
Structure learning algorithms:
nb: Naive Bayes (Minsky, 1961)
tan_cl: Chow-Liu's algorithm for one-dependence estimators (CL-ODE) (Friedman et al., 1997)
fssj: Forward sequential selection and joining (FSSJ) (Pazzani, 1996)
bsej: Backward sequential elimination and joining (BSEJ) (Pazzani, 1996)
tan_hc: Hill-climbing tree augmented naive Bayes (TAN-HC) (Keogh and Pazzani, 2002)
tan_hcsp: Hill-climbing super-parent tree augmented naive Bayes (TAN-HCSP) (Keogh and Pazzani, 2002)
aode: Averaged one-dependence estimators (AODE) (Webb et al., 2005)
Parameter learning methods (lp):
Bayesian and maximum likelihood estimation
Weighting attributes to alleviate naive Bayes' independence assumption (WANBIA) (Zaidi et al., 2013)
Attribute-weighted naive Bayes (AWNB) (Hall, 2007)
Model averaged naive Bayes (MANB) (Dash and Cooper, 2002)
Model evaluation:
cv: Cross-validated estimate of accuracy
logLik: Log-likelihood
AIC: Akaike's information criterion (AIC)
BIC: Bayesian information criterion (BIC)
Predicting:
predict: Inference for complete and/or incomplete data (the latter through gRain)
Inspecting models:
plot: Structure plotting (through igraph)
print: Summary
params: Access conditional probability tables
nparams: Number of free parameters
and more. See inspect_bnc_dag and inspect_bnc_bn. A sketch of a typical workflow combining these pieces follows this list.
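The following sketch combines these functionalities end to end, using the bundled car data; all calls are as documented in the sections below:
data(car)
# Learn structure and parameters in a single call
tan <- bnc('tan_cl', 'class', car, smooth = 1, dag_args = list(score = 'aic'))
# Estimate predictive accuracy with stratified cross-validation
cv(tan, car, k = 10)
# Inspect the fitted model
narcs(tan)
params(tan)$class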
Maintainer: Mihaljevic Bojan [email protected] [copyright holder]
Authors:
Bielza Concha [email protected]
Larranaga Pedro [email protected]
Other contributors:
Wickham Hadley (some code extracted from memoise package) [contributor]
Bielza C and Larranaga P (2014). Discrete Bayesian network classifiers: A survey. ACM Computing Surveys, 47(1), Article 5.
Dash D and Cooper GF (2002). Exact model averaging with naive Bayesian classifiers. 19th International Conference on Machine Learning (ICML-2002), pp. 91-98.
Friedman N, Geiger D and Goldszmidt M (1997). Bayesian network classifiers. Machine Learning, 29, pp. 131-163.
Zaidi NA, Cerquides J, Carman MJ and Webb GI (2013). Alleviating naive Bayes attribute independence assumption by attribute weighting. Journal of Machine Learning Research, 14, pp. 1947-1988.
Webb GI, Boughton JR and Wang Z (2005). Not so naive Bayes: Aggregating one-dependence estimators. Machine Learning, 58(1), pp. 5-24.
Hall M (2007). A decision tree-based attribute weighting filter for naive Bayes. Knowledge-Based Systems, 20(2), pp. 120-126.
Keogh E and Pazzani M (2002). Learning the structure of augmented Bayesian classifiers. International Journal on Artificial Intelligence Tools, 11(4), pp. 587-601.
Koller D and Friedman N (2009). Probabilistic Graphical Models: Principles and Techniques. MIT Press.
Pazzani M (1996). Constructive induction of Cartesian product attributes. In Proceedings of the Information, Statistics and Induction in Science Conference (ISIS-1996), pp. 66-77.
Useful links:
Report bugs at https://github.com/bmihaljevic/bnclassify/issues
Data set from the UCI repository: https://archive.ics.uci.edu/ml/datasets/Car+Evaluation.
A data.frame with 7 columns and 1728 rows.
Computes the (conditional) mutual information between two variables. If z is not NULL, returns the conditional mutual information, I(X; Y | Z). Otherwise, returns the mutual information, I(X; Y).
cmi(x, y, dataset, z = NULL, unit = "log")
x: A length one character.
y: A length one character.
dataset: A data frame. Must contain x, y and, optionally, z columns.
z: A character vector.
unit: A character. Logarithm base.
I(X; Y | Z) = H(X | Z) + H(Y | Z) - H(X, Y | Z), where H() is Shannon's entropy.
data(car)
cmi('maint', 'class', car)
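To compute the conditional variant, pass a conditioning variable via z; a small illustration on the same data (the choice of variables here is only for demonstration):
data(car)
cmi('maint', 'buying', car)
cmi('maint', 'buying', car, z = 'class')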
Estimate predictive accuracy of a classifier with stratified cross
validation. It learns the models from the training subsamples by repeating
the learning procedures used to obtain x
. It can keep the network
structure fixed and re-learn only the parameters, or re-learn both structure
and parameters.
cv(x, dataset, k, dag = TRUE, mean = TRUE)
x: A bnc_bn object or a list of them.
dataset: The data frame on which to evaluate the classifiers.
k: An integer. The number of folds.
dag: A logical. Whether to learn structure on each training subsample. Parameters are always learned.
mean: A logical. Whether to return the mean accuracy for each classifier or a k-row matrix with accuracies per fold.
A numeric vector of the same length as x, giving the predictive accuracy of each classifier. If mean = FALSE, then a matrix with k rows and one column per classifier in x.
data(car)
nb <- bnc('nb', 'class', car, smooth = 1)
# CV a single classifier
cv(nb, car, k = 10)
nb_manb <- bnc('nb', 'class', car, smooth = 1, manb_prior = 0.5)
cv(list(nb = nb, manb = nb_manb), car, k = 10)
# Get accuracies on each fold
cv(list(nb = nb, manb = nb_manb), car, k = 10, mean = FALSE)
ode <- bnc('tan_cl', 'class', car, smooth = 1, dag_args = list(score = 'aic'))
# Keep structure fixed across training subsamples
cv(ode, car, k = 10, dag = FALSE)
Convert a bnc_dag to igraph and grain objects.
as_igraph(x)
as_grain(x)
x: The bnc_dag object.
as_igraph(): Convert to an igraph object.
as_grain(): Convert to a grain object.
data(car)
nb <- bnc('nb', 'class', car, smooth = 1)
# Requires the grain and igraph packages installed
## Not run: g <- as_grain(nb)
## Not run: gRain::querygrain.grain(g)$buying
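as_igraph works analogously; a minimal sketch, assuming the igraph package is installed:
data(car)
nb <- bnc('nb', 'class', car, smooth = 1)
## Not run: g <- as_igraph(nb)
## Not run: igraph::V(g)  # one vertex per variable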
Greedy wrapper algorithms for learning Bayesian network classifiers. All algorithms use a cross-validated estimate of predictive accuracy to evaluate candidate structures.
fssj(class, dataset, k, epsilon = 0.01, smooth = 0, cache_reset = NULL)
bsej(class, dataset, k, epsilon = 0.01, smooth = 0, cache_reset = NULL)
tan_hc(class, dataset, k, epsilon = 0.01, smooth = 0, cache_reset = NULL)
kdb(class, dataset, k, kdbk = 2, epsilon = 0.01, smooth = 0, cache_reset = NULL)
tan_hcsp(class, dataset, k, epsilon = 0.01, smooth = 0, cache_reset = NULL)
class: A character. Name of the class variable.
dataset: The data frame from which to learn the classifier.
k: An integer. The number of folds.
epsilon: A numeric. Minimum absolute improvement in accuracy required to keep searching.
smooth: A numeric. The smoothing value (alpha) for Bayesian parameter estimation. Nonnegative.
cache_reset: A numeric. Number of iterations after which to reset the cache of conditional probability tables. A small number reduces the amount of memory used.
kdbk: An integer. The maximum number of feature parents per feature.
A bnc_dag object.
Pazzani M (1996). Constructive induction of Cartesian product attributes. In Proceedings of the Information, Statistics and Induction in Science Conference (ISIS-1996), pp. 66-77.
Keogh E and Pazzani M (2002). Learning the structure of augmented Bayesian classifiers. International Journal on Artificial Intelligence Tools, 11(4), pp. 587-601.
data(car)
tanhc <- tan_hc('class', car, k = 5, epsilon = 0)
## Not run: plot(tanhc)
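The remaining wrappers are called analogously; for instance, a hedged sketch for kdb, limiting each feature to at most two feature parents (wrapped in Not run since the search can be slow):
data(car)
## Not run: kdb2 <- kdb('class', car, k = 5, kdbk = 2)
## Not run: plot(kdb2)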
Functions for inspecting a bnc_bn object. In addition, you can query this object with the functions documented in inspect_bnc_dag.
nparams(x)
manb_arc_posterior(x)
awnb_weights(x)
params(x)
values(x)
classes(x)
x: The bnc_bn object.
nparams(): Returns the number of free parameters in the model.
manb_arc_posterior(): Returns the posterior of each arc from the class according to the MANB method.
awnb_weights(): Returns the AWNB feature weights.
params(): Returns the list of CPTs, in the same order as vars.
values(): Returns the possible values of each variable, in the same order as vars.
classes(): Returns the possible values of the class variable.
data(car)
nb <- bnc('nb', 'class', car, smooth = 1)
nparams(nb)
nb <- bnc('nb', 'class', car, smooth = 1, manb_prior = 0.5)
manb_arc_posterior(nb)
nb <- bnc('nb', 'class', car, smooth = 1, awnb_bootstrap = 0.5)
awnb_weights(nb)
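The remaining accessors can be explored in the same way; a short sketch:
data(car)
nb <- bnc('nb', 'class', car, smooth = 1)
classes(nb)        # values of the class variable
values(nb)$buying  # values of one feature
params(nb)$class   # CPT (prior distribution) of the class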
Functions for inspecting a bnc_dag object.
class_var(x)
features(x)
vars(x)
families(x)
modelstring(x)
feature_families(x)
narcs(x)
is_semi_naive(x)
is_anb(x)
is_nb(x)
is_ode(x)
x: The bnc_dag object.
class_var(): Returns the class variable.
features(): Returns the features.
vars(): Returns all variables (i.e., features + class).
families(): Returns the family of each variable.
modelstring(): Returns the model string of the network in bnlearn format (adding a space between two families).
feature_families(): Returns the family of each feature.
narcs(): Returns the number of arcs.
is_semi_naive(): Returns TRUE if x is a semi-naive Bayes.
is_anb(): Returns TRUE if x is an augmented naive Bayes.
is_nb(): Returns TRUE if x is a naive Bayes.
is_ode(): Returns TRUE if x is a one-dependence estimator.
data(car)
nb <- bnc('nb', 'class', car, smooth = 1)
narcs(nb)
is_ode(nb)
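The other accessors are used the same way; for instance:
data(car)
nb <- bnc('nb', 'class', car, smooth = 1)
class_var(nb)
features(nb)
modelstring(nb)  # bnlearn-format string
is_nb(nb)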
Learn parameters with maximum likelihood or Bayesian estimation, the weighting attributes to alleviate naive Bayes' independence assumption (WANBIA), attribute-weighted naive Bayes (AWNB), or the model averaged naive Bayes (MANB) methods. Returns a bnc_bn object.
lp(x, dataset, smooth, awnb_trees = NULL, awnb_bootstrap = NULL, manb_prior = NULL, wanbia = NULL)
x: The bnc_dag object.
dataset: The data frame from which to learn network parameters.
smooth: A numeric. The smoothing value (alpha) for Bayesian parameter estimation. Nonnegative.
awnb_trees: An integer. The number (M) of bootstrap samples to generate.
awnb_bootstrap: A numeric. The size of the bootstrap subsample, relative to the size of dataset (given in [0,1]).
manb_prior: A numeric. The prior probability for an arc between the class and any feature.
wanbia: A logical. If TRUE, WANBIA feature weighting is performed.
lp learns the parameters of each local distribution as theta_ijk = (N_ijk + alpha) / (N_ij. + r_i * alpha), where N_ijk is the number of instances in dataset in which X_i = k and Pa(X_i) = j, N_ij. = sum_k N_ijk, r_i is the cardinality of X_i, and all hyperparameters of the Dirichlet prior are equal to alpha. alpha = 0 corresponds to maximum likelihood estimation. Returns a uniform distribution when N_ij. + r_i * alpha = 0. With partially observed data, the above amounts to available case analysis.
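As an illustration of the estimate above, the CPT of a single feature in a naive Bayes can be replicated by hand. This is a sketch, assuming alpha = 1 and that the CPT of 'buying' is laid out as feature by class (which holds for the car data):
data(car)
nb_fit <- lp(nb('class', car), car, smooth = 1)
N <- table(car$buying, car$class)       # N_ijk, with Pa(buying) = class
r <- nlevels(car$buying)                # r_i
theta <- sweep(N + 1, 2, colSums(N) + r * 1, '/')
max(abs(theta - params(nb_fit)$buying)) # approximately zero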
WANBIA learns a unique exponent 'weight' w_i per feature. They are computed by optimizing conditional log-likelihood, and are bounded, with all w_i in [0, 1]. For WANBIA estimates, set wanbia to TRUE.
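A hedged usage sketch (the optimization of conditional log-likelihood can take a moment, so it is wrapped in Not run):
data(car)
## Not run: wnb <- lp(nb('class', car), car, smooth = 0.01, wanbia = TRUE)
## Not run: cv(wnb, car, k = 10)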
In order to get the AWNB parameter estimate, provide either the awnb_bootstrap and/or the awnb_trees argument. The estimate is theta_ijk^AWNB = theta_ijk^(w_i) / sum_k theta_ijk^(w_i), while the weights are computed as w_i = (1/M) sum_t sqrt(1 / d_ti), where M is the number of bootstrap samples from dataset and d_ti the minimum testing depth of X_i in an unpruned classification tree learned from the t-th subsample (the t-th term is 0 if X_i is omitted from the t-th tree).
The MANB parameters correspond to Bayesian model averaging over the naive Bayes models obtained from all 2^n subsets of the n features. To get MANB parameters, provide the manb_prior argument.
A bnc_bn object.
Hall M (2007). A decision tree-based attribute weighting filter for naive Bayes. Knowledge-Based Systems, 20(2), pp. 120-126.
Dash D and Cooper GF (2002). Exact model averaging with naive Bayesian classifiers. 19th International Conference on Machine Learning (ICML-2002), 91-98.
Pigott T D (2001) A review of methods for missing data. Educational research and evaluation, 7(4), 353-383.
data(car)
nb <- nb('class', car)
# Maximum likelihood estimation
mle <- lp(nb, car, smooth = 0)
# Bayesian estimation
bayes <- lp(nb, car, smooth = 0.5)
# MANB
manb <- lp(nb, car, smooth = 0.5, manb_prior = 0.5)
# AWNB
awnb <- lp(nb, car, smooth = 0.5, awnb_trees = 10)
Compute the (penalized) log-likelihood and conditional log-likelihood score of a bnc_bn object on a data set. Requires a data frame argument in addition to object.
## S3 method for class 'bnc_bn'
AIC(object, ...)
## S3 method for class 'bnc_bn'
BIC(object, ...)
## S3 method for class 'bnc_bn'
logLik(object, ...)
cLogLik(object, ...)
object: A bnc_bn object.
...: A data frame (dataset).
log-likelihood = log P(D | theta),
Akaike's information criterion (AIC) = log P(D | theta) - |theta|,
Bayesian information criterion (BIC) = log P(D | theta) - (log N / 2) * |theta|,
where |theta| is the number of free parameters in object, D is the data set, and N is the number of instances in D.
cLogLik computes the conditional log-likelihood of the model.
data(car)
nb <- bnc('nb', 'class', car, smooth = 1)
logLik(nb, car)
AIC(nb, car)
BIC(nb, car)
cLogLik(nb, car)
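If the penalized forms above hold, the gap between log-likelihood and AIC should equal the number of free parameters; a quick, hedged check:
data(car)
nb <- bnc('nb', 'class', car, smooth = 1)
as.numeric(logLik(nb, car)) - AIC(nb, car)  # should match nparams(nb)
nparams(nb)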
Learn a naive Bayes network structure.
nb(class, dataset = NULL, features = NULL)
class: A character. Name of the class variable.
dataset: The data frame from which to learn the classifier.
features: A character vector. The names of the features. This argument is ignored if dataset is provided.
A bnc_dag object.
data(car)
nb <- nb('class', car)
nb2 <- nb('class', features = letters[1:10])
## Not run: plot(nb2)
If node labels are too small to be viewed properly, you may fix the label font size with the argument fontsize. You may also try different layouts.
## S3 method for class 'bnc_dag'
plot(x, y, layoutType = "dot", fontsize = NULL, ...)
x: The bnc_dag object.
y: Not used.
layoutType: A character. Optional.
fontsize: An integer. Font size for node labels. Optional.
...: Not used.
# Requires the igraph package to be installed.
data(car)
nb <- nb('class', car)
## Not run: plot(nb)
## Not run: plot(nb, fontsize = 20)
## Not run: plot(nb, layoutType = 'circo')
## Not run: plot(nb, layoutType = 'fdp')
## Not run: plot(nb, layoutType = 'osage')
## Not run: plot(nb, layoutType = 'twopi')
## Not run: plot(nb, layoutType = 'neato')
Predicts class labels or class posterior probability distributions.
## S3 method for class 'bnc_fit'
predict(object, newdata, prob = FALSE, ...)
object: A bnc_bn object.
newdata: A data frame containing observations whose class has to be predicted.
prob: A logical. Whether class posterior probabilities should be returned.
...: Ignored.
Ties are resolved randomly. Inference is much slower if newdata contains NAs.
If prob = FALSE, returns a length-N factor with the same levels as the class variable, where N is the number of rows in newdata; each element is the most likely class for the corresponding row. If prob = TRUE, returns an N by C numeric matrix, where C is the number of classes; each row corresponds to the class posterior of the instance.
data(car)
nb <- bnc('nb', 'class', car, smooth = 1)
p <- predict(nb, car)
head(p)
p <- predict(nb, car, prob = TRUE)
head(p)
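Prediction with incomplete data goes through gRain and is much slower; a hedged sketch that introduces a single missing value (car_na is a modified copy made here for illustration, not part of the package):
data(car)
nb <- bnc('nb', 'class', car, smooth = 1)
car_na <- car
car_na[1, 'buying'] <- NA
## Not run: predict(nb, car_na)[1]  # requires the gRain package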
Learns a one-dependence Bayesian classifier using Chow-Liu's algorithm, by maximizing either log-likelihood, the AIC or BIC scores; maximizing log-likelihood corresponds to the well-known tree augmented naive Bayes (Friedman et al., 1997). When maximizing AIC or BIC the output might be a forest-augmented rather than a tree-augmented naive Bayes.
tan_cl(class, dataset, score = "loglik", root = NULL)
class: A character. Name of the class variable.
dataset: The data frame from which to learn the classifier.
score: A character. The score to be maximized.
root: A character. The feature to be used as root of the augmenting tree. Only one feature can be supplied, even in case of an augmenting forest. This argument is optional.
A bnc_dag object.
Friedman N, Geiger D and Goldszmidt M (1997). Bayesian network classifiers. Machine Learning, 29, pp. 131–163.
data(car)
ll <- tan_cl('class', car, score = 'loglik')
## Not run: plot(ll)
ll <- tan_cl('class', car, score = 'loglik', root = 'maint')
## Not run: plot(ll)
aic <- tan_cl('class', car, score = 'aic')
bic <- tan_cl('class', car, score = 'bic')
Data set from the UCI repository https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records.
A data.frame with 17 columns and 435 rows.