- Added `entmax15()` and `sparsemax15()` `mask_type`s, along with an optional `mask_topk` config parameter. (#180)
- `optimizer` now defaults to `torch_ignite_adam` when available, making pretraining and fitting tasks 30% faster. (#178)
- New `nn_aum_loss()` function for area under the \(\min(FPR, FNR)\) optimization in cases of unbalanced binary classification. (#178) See the sketch below.
- `nn_unsupervised_loss()` is now a proper loss function.
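A minimal sketch of how the new options might combine, assuming `mask_topk` is exposed as a `tabnet_config()` argument and `nn_aum_loss` can be passed directly as `loss`; the data frame name is a placeholder:

```r
library(tabnet)

# All values illustrative; `mask_topk` and the `loss = nn_aum_loss` usage
# are assumptions based on the entries above, not a documented signature.
config <- tabnet_config(
  mask_type = "entmax15",  # new mask_type, alongside sparsemax15
  mask_topk = 5,           # assumed: keep only the top-k features per mask
  loss = nn_aum_loss,      # assumed: AUM loss for unbalanced binary outcomes
  epochs = 20
)

fit <- tabnet_fit(Class ~ ., data = unbalanced_df, config = config)
```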
value_error("Can't convert data of class: 'NULL'") in R
4.5tabet_pretrain wrongly used instead of
tabnet_fit in Missing data predictor vignetteworkflows::add_case_weights() parameters (#151)tabnet_model and
from_epoch parameters (#143)tune::finalize_workflow() test to {parsnip} v1.2
breaking change. (#155)autoplot() now position the “has_checkpoint” points
correctly when a tabnet_fit() is continuing a previous
training using tabnet_model =. (#150)tabnet_model option will not be
- Added support for hierarchical outcomes through a {data.tree} `Node` dataset. (#126)
- `tabnet_pretrain()` now allows different GLU blocks in the GLU layers of the encoder and the decoder, through the `config()` parameters `num_independent_decoder` and `num_shared_decoder`. (#129)
- Added `reduce_on_plateau` as an option for `lr_scheduler` in `tabnet_config()`. (@SvenVw, #120) See the sketch below.
- `autoplot.tabnet_fit()` (#67)
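The decoder-specific GLU counts and the new scheduler option are plain `tabnet_config()` arguments; a sketch with illustrative values:

```r
config <- tabnet_config(
  num_independent_decoder = 2,        # independent GLU blocks in the decoder
  num_shared_decoder = 2,             # shared GLU blocks in the decoder
  lr_scheduler = "reduce_on_plateau", # new scheduler option
  learn_rate = 2e-2
)

pretrained <- tabnet_pretrain(Species ~ ., data = iris, config = config)
```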
- `tabnet_pretrain()` now allows missing values in predictors. (#68)
- `tabnet_explain()` now works for `tabnet_pretrain()` models. (#68)
- New `random_obfuscator()` torch_nn module. (#68)
- `tabnet_fit()` and `predict()` now allow missing values in predictors. (#76)
- `tabnet_config()` now supports a `num_workers=` parameter to control parallel dataloading. (#83)
- `tabnet_config()` now has a `skip_importance` flag to skip calculating feature importance. (@egillax, #91) See the sketch below.
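A sketch combining the dataloading and importance options; `df_with_NAs` is a placeholder for any data frame whose predictors contain missing values:

```r
config <- tabnet_config(
  num_workers = 2,        # parallel dataloading workers
  skip_importance = TRUE  # skip the feature-importance computation
)

# Missing values in predictors are handled by both pretraining and fitting.
fit <- tabnet_fit(outcome ~ ., data = df_with_NAs, config = config)
```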
- Exported `tabnet_nn`.
- Added a `min_grid.tabnet` method for {tune}. (@cphaarmeyer, #107)
- Added a `tabnet_explain()` method for parsnip models. (@cphaarmeyer, #108)
- `tabnet_fit()` and `predict()` now allow multiple outcomes, all numeric or all factor but not mixed. (#118) See the sketch below.
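A sketch of a multi-outcome fit using the x/y interface, with two factor outcomes from the `attrition` data in {modeldata}:

```r
data("attrition", package = "modeldata")

x <- attrition[, c("Age", "MonthlyIncome", "OverTime")]
y <- attrition[, c("Attrition", "BusinessTravel")]  # all-factor outcomes

fit <- tabnet_fit(x, y, epochs = 5)
predict(fit, x)  # predicts both outcomes at once
```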
- `tabnet_explain()` now correctly handles missing values in predictors. (#77)
- `dataloader` can now use `num_workers > 0`. (#83)
- Improved default values for `batch_size` and `virtual_batch_size` improve performance on mid-range devices.
- Added `engine = "torch"` to the tabnet parsnip model. (#114) See the sketch below.
- Fixed `autoplot()` warnings turned into errors with {ggplot2} v3.4. (#113)
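With the torch engine registered, the tabnet parsnip specification follows the usual tidymodels pattern; a sketch:

```r
library(parsnip)
library(tabnet)

# Declare the model spec, then fit it with the standard parsnip interface.
spec <- tabnet(epochs = 10) |>
  set_engine("torch") |>
  set_mode("classification")

fitted <- fit(spec, Species ~ ., data = iris)
```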
- Added an `update()` method for tabnet models to allow the correct usage of `finalize_workflow()`. (#60)
- `tabnet_fit()` (@cregouby, #26)
- `tabnet_explain()`
- Added `tabnet_pretrain()` for unsupervised pretraining. (@cregouby, #29)
- Added `autoplot()` of the model loss across epochs. (@cregouby, #36)
- Added a `config` argument to `fit()` / `pretrain()` so one can pass a pre-made config list. (#42) See the sketch below.
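These entries compose into the pretrain-then-fit workflow, with one pre-made config list shared by both steps; a sketch:

```r
library(tabnet)
library(ggplot2)  # for autoplot()

config <- tabnet_config(epochs = 20, valid_split = 0.2)

pretrained <- tabnet_pretrain(Species ~ ., data = iris, config = config)
fit <- tabnet_fit(Species ~ ., data = iris,
                  tabnet_model = pretrained, config = config)

autoplot(fit)  # model loss across epochs
```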
- In `tabnet_config()`, new `mask_type` option with `entmax` in addition to the default `sparsemax`. (@cmcmaster1, #48)
- In `tabnet_config()`, `loss` now also takes a function. (@cregouby, #55) See the sketch below.
- Added a `NEWS.md` file to track changes to the package.
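A sketch of both `tabnet_config()` extensions, assuming `loss` accepts a {torch} loss in place of the built-in strings:

```r
library(torch)

config <- tabnet_config(
  mask_type = "entmax",           # alternative to the default "sparsemax"
  loss = nn_cross_entropy_loss()  # assumed: a torch loss passed as a function
)
```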