Update text.py
text.py CHANGED
```diff
@@ -8,14 +8,14 @@ sum_app_text_tab_1= """
 
 """
 
-sum_app_text_tab_2= """
-
-
-
-- Gabriel/
-- Gabriel/
+sum_app_text_tab_2= """ ## Abstractive vs Extractive
+
+The underlying engine for the Abstractive part is BART, a transformer-based sequence-to-sequence model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. The model builds on [KBLab/bart-base-swedish-cased](https://huggingface.co/KBLab/bart-base-swedish-cased), a checkpoint pre-trained to learn general knowledge about the language. Afterwards, the model was further fine-tuned on two labelled datasets that have been open-sourced:
+
+- [Gabriel/xsum_swe](https://huggingface.co/datasets/Gabriel/xsum_swe)
+- [Gabriel/cnn_daily_swe](https://huggingface.co/datasets/Gabriel/cnn_daily_swe)
 
-To see more in depth regarding the training go to
+For a more in-depth look at the training, see the model card: [Gabriel/bart-base-cnn-xsum-swe](https://huggingface.co/Gabriel/bart-base-cnn-xsum-swe).
 
 The core idea behind the training procedure is sequential adaptation through transfer learning, i.e. multiple phases of fine-tuning a pretrained model on different datasets. The figure below illustrates how the skill level of the model increases at each step:
 
```
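The new tab text points readers to the fine-tuned checkpoint. As a quick orientation, here is a minimal sketch of how a checkpoint like [Gabriel/bart-base-cnn-xsum-swe](https://huggingface.co/Gabriel/bart-base-cnn-xsum-swe) is typically loaded with the standard `transformers` summarization pipeline; the generation parameters are illustrative assumptions, not values taken from this Space.

```python
# Minimal sketch: load the fine-tuned Swedish summariser named in the diff
# above via the standard Hugging Face pipeline. Generation parameters are
# illustrative assumptions, not values taken from the Space itself.
from transformers import pipeline

summarizer = pipeline("summarization", model="Gabriel/bart-base-cnn-xsum-swe")

article = "..."  # a long Swedish text to be condensed
result = summarizer(article, max_length=120, min_length=30, do_sample=False)
print(result[0]["summary_text"])
```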
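The last context line describes the training idea in prose: sequential adaptation through transfer learning. A hypothetical outline of such a phased loop, using the `transformers` Trainer API, might look as follows; the phase order, column names, and hyperparameters are assumptions for illustration, not the recipe documented on the model card.

```python
# Hypothetical sketch of sequential adaptation: fine-tune in phases, one
# dataset at a time, with each phase starting from the previous checkpoint.
# Dataset order, column names, and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "KBLab/bart-base-swedish-cased"  # general-language starting point
phases = ["Gabriel/cnn_daily_swe", "Gabriel/xsum_swe"]  # assumed phase order

tokenizer = AutoTokenizer.from_pretrained(checkpoint)

def preprocess(batch):
    # "document"/"summary" are assumed column names; check each dataset's schema.
    inputs = tokenizer(batch["document"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

for dataset_name in phases:
    model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
    train_set = load_dataset(dataset_name, split="train").map(preprocess, batched=True)
    out_dir = f"bart-phase-{dataset_name.split('/')[-1]}"
    trainer = Seq2SeqTrainer(
        model=model,
        args=Seq2SeqTrainingArguments(output_dir=out_dir, num_train_epochs=1),
        train_dataset=train_set,
        data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    )
    trainer.train()
    trainer.save_model(out_dir)  # persist this phase's weights
    checkpoint = out_dir         # the next phase resumes from here
```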