- 03 Jan 2022, 1 commit
  Committed by jrzaurin
- 31 Dec 2021, 2 commits
- 30 Dec 2021, 1 commit
  Committed by jrzaurin
- 29 Dec 2021, 1 commit
  Committed by jrzaurin
- 28 Dec 2021, 1 commit
  Committed by jrzaurin
- 20 Dec 2021, 1 commit
  Committed by jrzaurin
- 19 Dec 2021, 1 commit
  Committed by jrzaurin
  Cleaned and refactored the code for the attentive MLPs. Adjusted the unit tests. Docs built. Ready to move on to the Bayesian MLP.
- 11 Dec 2021, 1 commit
  Committed by jrzaurin
- 09 Dec 2021, 1 commit
  Committed by jrzaurin
  Unit-tested tabmlp, tabresnet and tabnet. Still left: the transformers, widedeep, and new unit tests for the attentive MLP.
- 08 Dec 2021, 2 commits
- 06 Dec 2021, 1 commit
  Committed by jrzaurin
  Fixed some minor typing and style issues. Adjusted the ZILN loss so it works with older versions of PyTorch, and renamed the quantile-regression multilabel label to qregression.
- 29 Nov 2021, 1 commit
  Committed by Pavol Mulinka
- 27 Nov 2021, 1 commit
  Committed by Pavol Mulinka
- 23 Nov 2021, 2 commits
- 20 Nov 2021, 1 commit
  Committed by jrzaurin
  Against all good practice, this is a massive commit that adds an entire new module with Bayesian models (which are not yet functional); making them functional will require some work. The entire models module has also been restructured in preparation for better days to come.
- 15 Nov 2021, 1 commit
  Committed by jrzaurin
- 12 Nov 2021, 1 commit
  Committed by jrzaurin
  First commit towards v2. Reorganized the models module and added a few new functionalities for the models there.
- 07 Oct 2021, 1 commit
  Committed by jrzaurin
- 07 Sep 2021, 1 commit
  Committed by jrzaurin
  Added docs. Redesigned Additive Attention according to their code. Added a few details to the docs and fixed a typo.
- 04 Sep 2021, 1 commit
  Committed by jrzaurin
- 01 Sep 2021, 1 commit
  Committed by jrzaurin
- 31 Aug 2021, 1 commit
  Committed by jrzaurin
  Added a proper implementation of transformer models where needed, and reorganised the transformers module for clarity.
- 29 Aug 2021, 1 commit
  Committed by jrzaurin
- 28 Aug 2021, 1 commit
  Committed by jrzaurin
- 24 Aug 2021, 1 commit
  Committed by jrzaurin
- 23 Aug 2021, 1 commit
  Committed by jrzaurin
  First commit towards the Perceiver. Changed the modularity so that embeddings are consistent across models. Adding the Perceiver also forced a change to the structure of the TabTransformer and SAINT.
- 11 Aug 2021, 1 commit
  Committed by jrzaurin
- 09 Aug 2021, 1 commit
  Committed by jrzaurin
  Added continuous-normalization options for all tabular models. Optimized the categorical embeddings for the transformer models. Adjusted the tab preprocessors for the transformer models.
- 06 Aug 2021, 1 commit
  Committed by jrzaurin
- 05 Aug 2021, 1 commit
  Committed by jrzaurin
- 03 Aug 2021, 2 commits
- 21 Jun 2021, 1 commit
  Committed by jrzaurin
  Refined the documentation. Added some tests for model saving. Added an example notebook. Ready to test installations and publish v1 to PyPI.
- 19 Jun 2021, 1 commit
  Committed by jrzaurin
  Added tests for the new trainer methods. Back to a documentation style more in line with popular packages like PyTorch.
- 23 May 2021, 1 commit
  Committed by jrzaurin
  Added `__getstate__` and `__setstate__` methods to the EarlyStopping and ModelCheckpoint callbacks. Added the possibility of using GRUs in the deeptext component, and of predicting using either the hidden state or the output. Fixed a small bug in the text processor. Improved the save method in the Trainer.
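The get/setstate pickling pattern mentioned in that commit can be sketched generically. The class below is a hypothetical stand-in for illustration only, not pytorch-widedeep's actual EarlyStopping implementation:

```python
import pickle


class EarlyStoppingLike:
    """Hypothetical stand-in for an early-stopping callback, illustrating
    the __getstate__/__setstate__ pattern for safe pickling."""

    def __init__(self, patience: int = 5):
        self.patience = patience
        self.best = float("inf")
        self.wait = 0

    def __getstate__(self):
        # Return a plain dict of state so the callback survives pickling
        # even if it later holds unpicklable references (e.g. a model).
        return {"patience": self.patience, "best": self.best, "wait": self.wait}

    def __setstate__(self, state):
        # Restore attributes directly; __init__ is not called on unpickle.
        self.__dict__.update(state)


cb = EarlyStoppingLike(patience=3)
cb.best = 0.42
restored = pickle.loads(pickle.dumps(cb))
assert (restored.patience, restored.best, restored.wait) == (3, 0.42, 0)
```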
- 30 Apr 2021, 1 commit
  Committed by jrzaurin
  ModelCheckpoint now saves the best epoch as well. Added a dropout option for TabNet. Adjusted RAdam for the new signatures. Adjusted training so it can take ReduceLROnPlateau, and so that it automatically restores the best weights after training.
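The restore-best-weights behaviour described in that commit can be sketched framework-free. `BestWeightsTracker` and its methods are hypothetical names for illustration, not the library's API:

```python
import copy


class BestWeightsTracker:
    """Minimal sketch: keep a deep copy of the best model state seen so
    far and hand it back after training."""

    def __init__(self):
        self.best_loss = float("inf")
        self.best_state = None

    def update(self, loss, state_dict):
        # Snapshot the weights whenever the validation loss improves.
        if loss < self.best_loss:
            self.best_loss = loss
            self.best_state = copy.deepcopy(state_dict)

    def restore(self):
        # After training, load this state back into the model.
        return self.best_state


tracker = BestWeightsTracker()
for epoch, loss in enumerate([0.9, 0.4, 0.6]):
    tracker.update(loss, {"w": epoch})  # stand-in for model.state_dict()
assert tracker.restore() == {"w": 1}  # epoch 1 had the lowest loss
```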
- 09 Apr 2021, 1 commit
  Committed by jrzaurin
  Fixed a documentation error: for the TabTransformer, input_embed is a list of 2-element tuples, not 3.