- 04 Aug 2023, 1 commit
  Committed by Javier
- 03 Aug 2023, 1 commit
  Committed by Javier
  Tests pass. Need to increase test coverage a bit for the TabTransformer and the attention layers, and review the docs.
- 02 Aug 2023, 1 commit
  Committed by Javier
  Implemented linear and "standard" attention in a functional way, so that both are available via parameters passed to the main multi-head attention class.
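The commit above describes selecting the attention mechanism through a constructor parameter. A minimal sketch of that idea follows; the names (`AttentionSketch`, `attn_type`) are illustrative assumptions, not the library's actual API, and head splitting/projections are omitted for brevity:

```python
# Hedged sketch: dispatching between "standard" (softmax) and "linear"
# (kernel feature map) attention via a parameter. Not the library's real
# class; single-head, no learned projections, numpy only.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class AttentionSketch:
    def __init__(self, attn_type="standard"):
        if attn_type not in ("standard", "linear"):
            raise ValueError("attn_type must be 'standard' or 'linear'")
        self.attn_type = attn_type

    def __call__(self, q, k, v):
        # q, k, v: (batch, seq_len, dim)
        if self.attn_type == "standard":
            # O(n^2) attention: softmax(Q K^T / sqrt(d)) V
            scores = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])
            return softmax(scores) @ v
        # "linear" attention: apply a positive feature map phi = elu(x) + 1
        # to Q and K, then reorder the matmuls so the cost is linear in n
        phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
        qp, kp = phi(q), phi(k)
        kv = kp.transpose(0, 2, 1) @ v                          # (b, d, d)
        z = qp @ kp.sum(axis=1, keepdims=True).transpose(0, 2, 1)  # (b, n, 1)
        return (qp @ kv) / z
```

Both variants return the same output shape, so a caller can swap `attn_type` without touching the rest of the model.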
- 27 Jul 2023, 1 commit
  Committed by Javier
  Added scripts showing how to use the library for recsys, in response to issue #133. Also added a simple/basic transformer model for the text component before integrating with HF, and added the option of specifying the dimension of the feed-forward network.
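The configurable feed-forward dimension mentioned above can be sketched as follows. This is an illustration of the general idea, not the library's actual implementation; the function names and initialization are assumptions:

```python
# Hedged sketch: a position-wise feed-forward block whose hidden
# dimension d_ff is a parameter rather than being hard-coded (commonly
# fixed at 4 * d_model in transformer implementations).
import numpy as np

def make_ffn_params(d_model, d_ff, rng):
    # Exposing d_ff lets callers trade model capacity for speed/memory.
    w1 = rng.normal(scale=0.02, size=(d_model, d_ff))
    w2 = rng.normal(scale=0.02, size=(d_ff, d_model))
    return w1, np.zeros(d_ff), w2, np.zeros(d_model)

def position_wise_ffn(x, w1, b1, w2, b2):
    """Linear(d_model -> d_ff) -> ReLU -> Linear(d_ff -> d_model)."""
    return np.maximum(x @ w1 + b1, 0.0) @ w2 + b2
```

Because the block maps back to `d_model`, any `d_ff` value keeps the surrounding residual connections valid.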
- 04 May 2023, 1 commit
  Committed by Javier
- 23 Mar 2023, 1 commit
  Committed by Javier
- 21 Aug 2022, 1 commit
  Committed by Javier Rodriguez Zaurin
- 14 Aug 2022, 1 commit
  Committed by Javier Rodriguez Zaurin
  Removed all partial imports and all `import *` with the aim of avoiding potential circular-import issues and increasing readability. Fixed a couple of bugs along the way.
- 19 Jul 2022, 1 commit
  Committed by Javier Rodriguez Zaurin
  Manually brought all the mkdocs changes to the self-supervised branch, as the conflicts were somehow too much to resolve via Sublime Merge or VS Code. I am at utils. After this we need to test the callbacks for self-supervised training, then do the usual overall checks and install checks, and make the massive release.
- 06 Jul 2022, 1 commit
  Committed by Javier Rodriguez Zaurin
- 07 May 2022, 1 commit
  Committed by Javier Rodriguez Zaurin
  Self-supervised training now runs for all attention-based models. Whether it learns something or not is a question we still need to answer.
- 04 May 2022, 1 commit
  Committed by Javier Rodriguez Zaurin
  Adjusted the output and encoder_output dims so that all models can be used with self-supervised training.
- 10 Mar 2022, 1 commit
  Committed by jrzaurin
- 05 Mar 2022, 1 commit
  Committed by jrzaurin
- 20 Jan 2022, 1 commit
  Committed by jrzaurin
- 03 Jan 2022, 1 commit
  Committed by jrzaurin
- 31 Dec 2021, 2 commits
- 29 Dec 2021, 1 commit
  Committed by jrzaurin
- 20 Dec 2021, 1 commit
  Committed by jrzaurin
- 08 Dec 2021, 1 commit
  Committed by jrzaurin
- 20 Nov 2021, 1 commit
  Committed by jrzaurin
  Against all good practice, this is a massive commit that adds an entire new module with Bayesian models (which are not yet functional); making them functional requires some work. The entire models module has also been restructured in preparation for better days to come.
- 15 Nov 2021, 1 commit
  Committed by jrzaurin
- 12 Nov 2021, 1 commit
  Committed by jrzaurin
  First commit towards v2. Reorganized the models module and added a few new functionalities for the models in there.
- 07 Sep 2021, 1 commit
  Committed by jrzaurin
  Added docs. Redesigned Additive Attention according to their code. Added a few details to the docs and fixed a typo.
- 04 Sep 2021, 1 commit
  Committed by jrzaurin
- 01 Sep 2021, 1 commit
  Committed by jrzaurin
- 31 Aug 2021, 1 commit
  Committed by jrzaurin
  Added a proper implementation of transformer models where needed, and reorganized the transformers module for clarity.
- 29 Aug 2021, 1 commit
  Committed by jrzaurin
- 28 Aug 2021, 1 commit
  Committed by jrzaurin
- 23 Aug 2021, 1 commit
  Committed by jrzaurin
  First commit towards the Perceiver. Changed the modularity so that embeddings are consistent across models. Adding the Perceiver also forced me to change the structure of the TabTransformer and SAINT.
- 11 Aug 2021, 1 commit
  Committed by jrzaurin
- 09 Aug 2021, 1 commit
  Committed by jrzaurin
  Added continuous-normalization options for all tabular models, optimized the categorical embeddings for transformer models, and adjusted the tab preprocessors for transformer models.
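The "continuous-normalization options" in this commit can be illustrated with a small sketch. The option name and its accepted values below are assumptions for illustration, not the library's actual parameter:

```python
# Hedged sketch: a normalization option for the continuous columns of a
# tabular model. cont_norm: None (pass-through), "batchnorm" (per-feature
# statistics across the batch), or "layernorm" (per-sample statistics
# across the features). Illustrative only; numpy, no learned affine terms.
import numpy as np

def normalize_continuous(x, cont_norm=None, eps=1e-5):
    """x: (batch_size, n_continuous_features)."""
    if cont_norm is None:
        return x  # use the raw continuous features
    if cont_norm == "batchnorm":
        axis = 0
    elif cont_norm == "layernorm":
        axis = 1
    else:
        raise ValueError(f"unknown cont_norm: {cont_norm!r}")
    mu = x.mean(axis=axis, keepdims=True)
    var = x.var(axis=axis, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)
```

Offering the choice as a parameter lets the same tabular model be configured per dataset, since which normalization (if any) helps tends to be data-dependent.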
- 06 Aug 2021, 1 commit
  Committed by jrzaurin
- 05 Aug 2021, 1 commit
  Committed by jrzaurin
- 03 Aug 2021, 2 commits