Update README.md
README.md
CHANGED
@@ -14,7 +14,8 @@ pipeline_tag: any-to-any
Multi-modal Variational Autoencoder for text embedding transformation using geometric fusion.
This first version is essentially clip_l + t5-base. It is similar in concept to the shunt prototypes, but entirely divergent in implementation: this variation is formatted and trained specifically as a VAE that encodes/decodes pairs of encodings together (a rough sketch of this pairing is given below, after the diff).
-
Cantor cross-attention allows a form high-density sparse containment, which when implemented correctly is a highly efficient global attention mechanism to ensure solidity.
+
Cantor cross-attention allows a form of high-density sparse containment, which when implemented correctly is a highly efficient global attention mechanism to ensure solidity.
+
Fractal modalities make this possible: the sparsity gaps and the learned point encodings of the pattern follow a series of mathematical rules (a toy construction is sketched below, after the diff).
The current implementation is trained with only a handful of token sequences, so it's essentially front-loaded. Expect short sequences to work, along with many longer sequences.
Full-sequence pretraining will begin soon with a uniform vocabulary that takes both tokens in and maps them to a representative uniform token based on position.
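
Below is a minimal sketch of the pairing idea described in the README text above: a VAE that encodes a clip_l embedding and a t5-base embedding together and decodes both back out. The layer sizes, the concatenation-based fusion, and the names (`PairedEncodingVAE`, `latent_dim`, etc.) are illustrative assumptions, not the actual architecture or geometric fusion used by this model.

```python
# Hypothetical sketch of a VAE over a *pair* of text encodings (clip_l + t5-base).
# Shapes, layer sizes, and the concatenation fusion are assumptions for illustration.
import torch
import torch.nn as nn

class PairedEncodingVAE(nn.Module):
    def __init__(self, clip_dim: int = 768, t5_dim: int = 768, latent_dim: int = 256):
        super().__init__()
        self.encode_pair = nn.Sequential(nn.Linear(clip_dim + t5_dim, 1024), nn.GELU())
        self.to_mu = nn.Linear(1024, latent_dim)
        self.to_logvar = nn.Linear(1024, latent_dim)
        self.decode_latent = nn.Sequential(nn.Linear(latent_dim, 1024), nn.GELU())
        self.to_clip = nn.Linear(1024, clip_dim)  # reconstruct the clip_l embedding
        self.to_t5 = nn.Linear(1024, t5_dim)      # reconstruct the t5-base embedding

    def forward(self, clip_emb: torch.Tensor, t5_emb: torch.Tensor):
        # Encode both embeddings together, position by position.
        h = self.encode_pair(torch.cat([clip_emb, t5_emb], dim=-1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        d = self.decode_latent(z)
        return self.to_clip(d), self.to_t5(d), mu, logvar
```

Training such a model would combine the two reconstruction losses with the usual KL term against a standard normal prior, as in any VAE.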
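
And here is one possible reading of the Cantor-style sparsity mentioned in the added lines: keep only the key positions that survive repeated middle-third removal, so the attention mask is sparse yet still spans the whole sequence. The function names and the exact removal rule below are hypothetical illustrations, not the rule this model actually uses.

```python
# Hypothetical illustration of a Cantor-set-style sparse attention mask.
import torch

def cantor_keep_mask(seq_len: int, depth: int = 3) -> torch.Tensor:
    """Boolean mask of key positions that survive `depth` middle-third removals."""
    keep = torch.ones(seq_len, dtype=torch.bool)
    spans = [(0, seq_len)]
    for _ in range(depth):
        next_spans = []
        for start, end in spans:
            third = (end - start) // 3
            if third == 0:
                continue
            keep[start + third : start + 2 * third] = False  # drop the middle third
            next_spans += [(start, start + third), (start + 2 * third, end)]
        spans = next_spans
    return keep

def cantor_attention_bias(seq_len: int, depth: int = 3) -> torch.Tensor:
    """Additive attention bias: 0 for surviving key positions, -inf for masked ones."""
    keep = cantor_keep_mask(seq_len, depth)
    bias = torch.zeros(seq_len, seq_len)
    bias[:, ~keep] = float("-inf")
    return bias  # add to the attention logits before the softmax
```

Each removal step keeps roughly two thirds of every surviving span, so after three steps only about 30% of the key positions remain attendable, and those positions are spread across the entire sequence, which is what makes the pattern sparse but still global.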