Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
Paper: arXiv:1908.10084
This is a sentence-transformers model finetuned from indobenchmark/indobert-base-p2. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
SentenceTransformer(
(0): Transformer({'max_seq_length': 200, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
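The Pooling module above mean-pools the token embeddings into a single 768-dimensional sentence vector. As a rough illustration, here is a sketch of that step using the plain transformers API; the model id is the same placeholder used in the usage example below, and this is not the library's exact internal code:

import torch
from transformers import AutoTokenizer, AutoModel

# Placeholder model id, as in the usage example below.
tokenizer = AutoTokenizer.from_pretrained("sentence_transformers_model_id")
bert = AutoModel.from_pretrained("sentence_transformers_model_id")

inputs = tokenizer(["Contoh kalimat."], padding=True, truncation=True,
                   max_length=200, return_tensors="pt")
with torch.no_grad():
    token_embeddings = bert(**inputs).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling: average the token vectors, ignoring padding positions.
mask = inputs["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
print(sentence_embedding.shape)  # torch.Size([1, 768])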
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Waduk wadaslintang sebenarnya terbagi menjadi dua kabupaten yaitu kabupaten kebumen dan kabupaten wonosobo.',
'Kabupaten kebumen dan kabupaten wonosobo bertentaggaan.',
'Musim ini di ajang PBL 2020 Hendra melawan tim Pune 7 aces.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
Semantic similarity on the sts-dev split, measured with EmbeddingSimilarityEvaluator:

| Metric | Value |
|---|---|
| pearson_cosine | -0.0516 |
| spearman_cosine | -0.0593 |
| pearson_manhattan | -0.0643 |
| spearman_manhattan | -0.066 |
| pearson_euclidean | -0.0637 |
| spearman_euclidean | -0.0653 |
| pearson_dot | -0.0279 |
| spearman_dot | -0.026 |
| pearson_max | -0.0279 |
| spearman_max | -0.026 |
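For reference, a minimal sketch of how such an evaluator is constructed; the sentence pairs and gold scores below are illustrative placeholders, not the actual sts-dev data:

from sentence_transformers import SimilarityFunction
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# Placeholder pairs with gold similarity scores in [0, 1].
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["Kalimat pertama.", "Kalimat kedua."],
    sentences2=["Kalimat yang mirip.", "Kalimat yang berbeda."],
    scores=[0.9, 0.1],
    main_similarity=SimilarityFunction.COSINE,
    name="sts-dev",
)
results = evaluator(model)  # pearson/spearman scores per similarity function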
Training data columns: sentence_0, sentence_1, and label

| | sentence_0 | sentence_1 | label |
|---|---|---|---|
| type | string | string | int |
Samples:

| sentence_0 | sentence_1 | label |
|---|---|---|
| Pada tahun 1436, pulau Timor mempunyai 12 kota bandar namun tidak disebutkan namanya. | Pulau Timor memiliki 10 kota bandar. | 2 |
| Komoditas pertanian yang ada di desa ini antara lain: bunga potong, sayur mayur, waluh (lejet) terutama Paprika (Capsicum annuum L.). Komoditas ini menjadi sumber perekonomian utama di desa ini karena harganya yang lumayan dibandingkan sayuran lain. | Komoditas pertanian di desa ini lebih mahal dibandingkan sayuran lain. | 1 |
| Setelah batas waktu pencalonan pada tanggal 15 Juli 2003, sembilan kota telah mencalonkan diri untuk mengadakan Olimpiade 2012. Kota-kota tersebut adalah Havana, Istanbul, Leipzig, London, Madrid, Moskwa, New York City, Paris, dan Rio de Janeiro. Pada 18 Mei 2004, Komite Olimpiade Internasional (IOC), sebagai hasil penilaian teknis, mengurangi jumlah kota kandidat menjadi lima: London, Madrid, Moskwa, New York, dan Paris. | Jumlah kota kandidat tuan rumah olimpide bertambah pada 18 Mei 2004. | 2 |
Loss: MultipleNegativesRankingLoss with these parameters:
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
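For context, a minimal, hypothetical fine-tuning sketch with this loss using the v3-style trainer API; the example pairs are illustrative, not the card's actual training set:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("indobenchmark/indobert-base-p2")

# Placeholder (anchor, positive) pairs; the loss treats all other in-batch
# positives as negatives for each anchor.
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "Waduk Wadaslintang terbagi menjadi dua kabupaten.",
        "Pulau Timor mempunyai 12 kota bandar pada tahun 1436.",
    ],
    "sentence_1": [
        "Kabupaten Kebumen dan Kabupaten Wonosobo bertetanggaan.",
        "Pada 1436 terdapat 12 kota bandar di pulau Timor.",
    ],
})

loss = MultipleNegativesRankingLoss(model, scale=20.0)  # matches the parameters above
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()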
Non-default hyperparameters:
- eval_strategy: steps
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 32
- multi_dataset_batch_sampler: round_robin

All hyperparameters:
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 32
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 3
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters: 
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin

Training logs:

| Epoch | Step | Training Loss | sts-dev_spearman_max |
|---|---|---|---|
| 0.0991 | 32 | - | -0.0592 |
| 0.1981 | 64 | - | -0.0425 |
| 0.2972 | 96 | - | -0.0467 |
| 0.3963 | 128 | - | -0.0428 |
| 0.4954 | 160 | - | -0.0512 |
| 0.5944 | 192 | - | -0.0473 |
| 0.6935 | 224 | - | -0.0412 |
| 0.7926 | 256 | - | -0.0435 |
| 0.8916 | 288 | - | -0.0405 |
| 0.9907 | 320 | - | -0.0425 |
| 1.0 | 323 | - | -0.0420 |
| 1.0898 | 352 | - | -0.0346 |
| 1.1889 | 384 | - | -0.0333 |
| 1.2879 | 416 | - | -0.0325 |
| 1.3870 | 448 | - | -0.0312 |
| 1.4861 | 480 | - | -0.0316 |
| 1.5480 | 500 | 0.077 | - |
| 1.5851 | 512 | - | -0.0260 |
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Base model: indobenchmark/indobert-base-p2