SebastianBodza committed on
Commit ee94314 · verified · 1 Parent(s): c718983

Upload 10 files
.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer_config.json filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
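The two added lines tell Git to route `tokenizer_config.json` and `tokenizer.json` through the LFS filter, alongside the pre-existing glob patterns. As a rough sketch of which paths the patterns catch (using `fnmatch` as an approximation — real `.gitattributes` matching has extra rules for slashes and directory scoping):

```python
from fnmatch import fnmatch

# Two exact-name patterns added in this commit, plus one pre-existing
# glob from .gitattributes for comparison.
lfs_patterns = ["tokenizer_config.json", "tokenizer.json", "*tfevents*"]

def is_lfs_tracked(path: str) -> bool:
    # Approximate .gitattributes matching with shell-style globbing.
    return any(fnmatch(path, p) for p in lfs_patterns)

print(is_lfs_tracked("tokenizer.json"))           # True
print(is_lfs_tracked("special_tokens_map.json"))  # False
```

This is why the `tokenizer.json` and `tokenizer_config.json` diffs below render as LFS pointer files rather than JSON content.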
added_tokens.json ADDED
The diff for this file is too large to render. See raw diff
 
config.json ADDED
@@ -0,0 +1,39 @@
+ {
+ "_name_or_path": "SebastianBodza/SmolKartoffel-135M-v0.1",
+ "architectures": [
+ "LlamaForCausalLM"
+ ],
+ "attention_bias": false,
+ "attention_dropout": 0.0,
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "head_dim": 64,
+ "hidden_act": "silu",
+ "hidden_size": 576,
+ "initializer_range": 0.041666666666666664,
+ "intermediate_size": 1536,
+ "is_llama_config": true,
+ "max_position_embeddings": 8192,
+ "mlp_bias": false,
+ "model_type": "llama",
+ "num_attention_heads": 9,
+ "num_hidden_layers": 30,
+ "num_key_value_heads": 3,
+ "pad_token_id": 2,
+ "pretraining_tp": 1,
+ "rms_norm_eps": 1e-05,
+ "rope_interleaved": false,
+ "rope_scaling": null,
+ "rope_theta": 100000,
+ "tie_word_embeddings": true,
+ "torch_dtype": "bfloat16",
+ "transformers.js_config": {
+ "kv_cache_dtype": {
+ "fp16": "float16",
+ "q4f16": "float16"
+ }
+ },
+ "transformers_version": "4.49.0",
+ "use_cache": false,
+ "vocab_size": 114696
+ }
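As a quick sanity check (a sketch, not part of the repo), the attention geometry implied by this config is internally consistent: a hidden size of 576 split across 9 heads gives the stated `head_dim` of 64, and 9 query heads sharing 3 key/value heads means 3-way grouped-query attention.

```python
# Sanity-check the attention geometry implied by config.json above.
# All values are copied from the config; nothing is loaded from the Hub.
config = {
    "hidden_size": 576,
    "num_attention_heads": 9,
    "num_key_value_heads": 3,
    "head_dim": 64,
}

# Each attention head covers hidden_size / num_attention_heads dimensions.
derived_head_dim = config["hidden_size"] // config["num_attention_heads"]
assert derived_head_dim == config["head_dim"]  # 576 / 9 = 64

# 9 query heads sharing 3 KV heads -> groups of 3 (grouped-query attention).
gqa_group_size = config["num_attention_heads"] // config["num_key_value_heads"]
print(gqa_group_size)  # 3
```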
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "_from_model_config": true,
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "pad_token_id": 2,
+ "transformers_version": "4.49.0"
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e3adf4f2636b40f879f9a54364e78c0f3014c075f12c7975a36ea9abdc9a8741
+ size 344567352
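`model.safetensors` is stored as a Git LFS pointer rather than the raw weights; the pointer format above (`version` / `oid` / `size`, one key-value pair per line) is easy to parse. A minimal sketch, using the exact pointer contents from this diff:

```python
# Parse a git-lfs pointer file into a dict of its key-value lines.
pointer_text = """\
version https://git-lfs.github.com/spec/v1
oid sha256:e3adf4f2636b40f879f9a54364e78c0f3014c075f12c7975a36ea9abdc9a8741
size 344567352
"""

def parse_lfs_pointer(text: str) -> dict:
    # Each line is "<key> <value>"; split on the first space only,
    # since the oid value contains a colon but no spaces.
    return dict(line.split(" ", 1) for line in text.strip().splitlines())

pointer = parse_lfs_pointer(pointer_text)
print(pointer["size"])     # 344567352
print(pointer["oid"][:6])  # sha256
```

The `size` field is the byte count of the real file, so the weights here are roughly 344 MB.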
special_tokens_map.json ADDED
@@ -0,0 +1,28 @@
+ {
+ "additional_special_tokens": [
+ "<|im_start|>",
+ "<|im_end|>"
+ ],
+ "bos_token": {
+ "content": "<|im_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": "<|im_end|>",
+ "unk_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
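The special tokens above follow the ChatML convention, with `<|im_start|>` doubling as BOS and `<|im_end|>` serving as both EOS and padding. A rough sketch of how a turn would be framed with these tokens — the actual chat template lives in the LFS-tracked `tokenizer_config.json` and is not visible in this diff, so this layout is an assumption:

```python
# Frame a single chat turn using the special tokens declared above.
# NOTE: the real template is inside tokenizer_config.json (stored in LFS);
# this ChatML-style layout is an assumption, not the repo's verified template.
BOS = "<|im_start|>"  # bos_token
EOS = "<|im_end|>"    # eos_token and pad_token

def format_turn(role: str, content: str) -> str:
    return f"{BOS}{role}\n{content}{EOS}\n"

prompt = format_turn("user", "Hallo!")
print(prompt)
```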
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:33ef257c609ccb5087c3a96e44965dc71978cddf1ec2ddbc9dc9b7c9832f35c4
+ size 15783253
tokenizer_config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f217094602cf3ab2008f28eaf7d8b755c8ad06b9eec0b520f15ce4c54734a4da
+ size 11608819
trainer_state.json ADDED
@@ -0,0 +1,1333 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 0.16,
+ "eval_steps": 500,
+ "global_step": 100,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {"clip_ratio": 0.0, "completion_length": 1947.125, "epoch": 0.0016, "grad_norm": 0.55078125, "kl": 0.0, "learning_rate": 6.25e-07, "loss": 0.0, "reward": 0.7148577496409416, "reward_std": 0.2588387345895171, "rewards/wer_reward_func": 0.7148577496409416, "step": 1},
+ {"clip_ratio": 0.0, "completion_length": 1946.5, "epoch": 0.0032, "grad_norm": 0.55078125, "kl": 0.0, "learning_rate": 1.25e-06, "loss": -0.0, "reward": 0.7822023555636406, "reward_std": 0.24099930934607983, "rewards/wer_reward_func": 0.7822023555636406, "step": 2},
+ {"clip_ratio": 0.0, "completion_length": 1951.5, "epoch": 0.0048, "grad_norm": 1.015625, "kl": 0.016162421787157655, "learning_rate": 1.8750000000000003e-06, "loss": 0.0006, "reward": 0.7216365113854408, "reward_std": 0.13873484916985035, "rewards/wer_reward_func": 0.7216365113854408, "step": 3},
+ {"clip_ratio": 0.0, "completion_length": 1932.75, "epoch": 0.0064, "grad_norm": 0.6640625, "kl": 0.015228205360472202, "learning_rate": 2.5e-06, "loss": 0.0006, "reward": 0.811914250254631, "reward_std": 0.10335290990769863, "rewards/wer_reward_func": 0.811914250254631, "step": 4},
+ {"clip_ratio": 0.0, "completion_length": 1958.625, "epoch": 0.008, "grad_norm": 0.57421875, "kl": 0.016641404596157372, "learning_rate": 3.125e-06, "loss": 0.0007, "reward": 0.6705156937241554, "reward_std": 0.14204794133547693, "rewards/wer_reward_func": 0.6705156937241554, "step": 5},
+ {"clip_ratio": 0.0, "completion_length": 1967.0, "epoch": 0.0096, "grad_norm": 0.65234375, "kl": 0.016110880533233285, "learning_rate": 3.7500000000000005e-06, "loss": 0.0006, "reward": 0.7191379070281982, "reward_std": 0.1857276821974665, "rewards/wer_reward_func": 0.7191379070281982, "step": 6},
+ {"clip_ratio": 0.0, "completion_length": 1947.375, "epoch": 0.0112, "grad_norm": 0.91796875, "kl": 0.01589326758403331, "learning_rate": 4.3750000000000005e-06, "loss": 0.0006, "reward": 0.7357326671481133, "reward_std": 0.1887976780999452, "rewards/wer_reward_func": 0.7357326671481133, "step": 7},
+ {"clip_ratio": 0.0, "completion_length": 1956.0, "epoch": 0.0128, "grad_norm": 0.8125, "kl": 0.01580773969180882, "learning_rate": 5e-06, "loss": 0.0006, "reward": 0.6810178197920322, "reward_std": 0.24038350896444172, "rewards/wer_reward_func": 0.6810178197920322, "step": 8},
+ {"clip_ratio": 0.0, "completion_length": 1939.875, "epoch": 0.0144, "grad_norm": 0.61328125, "kl": 0.016684093279764056, "learning_rate": 5.625e-06, "loss": 0.0007, "reward": 0.7145852670073509, "reward_std": 0.16899098921567202, "rewards/wer_reward_func": 0.7145852670073509, "step": 9},
+ {"clip_ratio": 0.0, "completion_length": 1938.25, "epoch": 0.016, "grad_norm": 0.66015625, "kl": 0.016166918678209186, "learning_rate": 6.25e-06, "loss": 0.0006, "reward": 0.7534582614898682, "reward_std": 0.19622210646048188, "rewards/wer_reward_func": 0.7534582614898682, "step": 10},
+ {"clip_ratio": 0.0, "completion_length": 1941.625, "epoch": 0.0176, "grad_norm": 0.62109375, "kl": 0.015991921653039753, "learning_rate": 6.875e-06, "loss": 0.0006, "reward": 0.7752574235200882, "reward_std": 0.18537394842132926, "rewards/wer_reward_func": 0.7752574235200882, "step": 11},
+ {"clip_ratio": 0.0, "completion_length": 1938.5, "epoch": 0.0192, "grad_norm": 0.50390625, "kl": 0.015584928914904594, "learning_rate": 7.500000000000001e-06, "loss": 0.0006, "reward": 0.7797695994377136, "reward_std": 0.11821027041878551, "rewards/wer_reward_func": 0.7797695994377136, "step": 12},
+ {"clip_ratio": 0.0, "completion_length": 1934.75, "epoch": 0.0208, "grad_norm": 0.515625, "kl": 0.01588472374714911, "learning_rate": 8.125000000000001e-06, "loss": 0.0006, "reward": 0.7346231155097485, "reward_std": 0.18602621834725142, "rewards/wer_reward_func": 0.7346231155097485, "step": 13},
+ {"clip_ratio": 0.0, "completion_length": 1960.375, "epoch": 0.0224, "grad_norm": 0.65625, "kl": 0.015960810356773436, "learning_rate": 8.750000000000001e-06, "loss": 0.0006, "reward": 0.8352436944842339, "reward_std": 0.16693894029594958, "rewards/wer_reward_func": 0.8352436944842339, "step": 14},
+ {"clip_ratio": 0.0, "completion_length": 1960.375, "epoch": 0.024, "grad_norm": 0.6484375, "kl": 0.016010576975531876, "learning_rate": 9.375000000000001e-06, "loss": 0.0006, "reward": 0.7679749764502048, "reward_std": 0.1572786932811141, "rewards/wer_reward_func": 0.7679749764502048, "step": 15},
+ {"clip_ratio": 0.0, "completion_length": 1970.25, "epoch": 0.0256, "grad_norm": 0.64453125, "kl": 0.01612092077266425, "learning_rate": 1e-05, "loss": 0.0006, "reward": 0.7296933978796005, "reward_std": 0.17958847293630242, "rewards/wer_reward_func": 0.7296933978796005, "step": 16},
+ {"clip_ratio": 0.0, "completion_length": 1969.625, "epoch": 0.0272, "grad_norm": 0.4765625, "kl": 0.016002243966795504, "learning_rate": 1.0625e-05, "loss": 0.0006, "reward": 0.5768027417361736, "reward_std": 0.20896703843027353, "rewards/wer_reward_func": 0.5768027417361736, "step": 17},
+ {"clip_ratio": 0.0, "completion_length": 1970.375, "epoch": 0.0288, "grad_norm": 0.546875, "kl": 0.016185182612389326, "learning_rate": 1.125e-05, "loss": 0.0006, "reward": 0.6374355666339397, "reward_std": 0.1769604617729783, "rewards/wer_reward_func": 0.6374355666339397, "step": 18},
+ {"clip_ratio": 0.0, "completion_length": 1943.75, "epoch": 0.0304, "grad_norm": 0.44921875, "kl": 0.015821643988601863, "learning_rate": 1.1875e-05, "loss": 0.0006, "reward": 0.6718243807554245, "reward_std": 0.1820401716977358, "rewards/wer_reward_func": 0.6718243807554245, "step": 19},
+ {"clip_ratio": 0.0, "completion_length": 1970.25, "epoch": 0.032, "grad_norm": 0.51953125, "kl": 0.01559991657268256, "learning_rate": 1.25e-05, "loss": 0.0006, "reward": 0.6518857628107071, "reward_std": 0.2095832945778966, "rewards/wer_reward_func": 0.6518857628107071, "step": 20},
+ {"clip_ratio": 0.0, "completion_length": 1969.125, "epoch": 0.0336, "grad_norm": 0.61328125, "kl": 0.016375139821320772, "learning_rate": 1.3125e-05, "loss": 0.0007, "reward": 0.622985552996397, "reward_std": 0.21184765454381704, "rewards/wer_reward_func": 0.622985552996397, "step": 21},
+ {"clip_ratio": 0.0, "completion_length": 1973.125, "epoch": 0.0352, "grad_norm": 0.51171875, "kl": 0.016077731852419674, "learning_rate": 1.375e-05, "loss": 0.0006, "reward": 0.6820645965635777, "reward_std": 0.22380293253809214, "rewards/wer_reward_func": 0.6820645965635777, "step": 22},
+ {"clip_ratio": 0.0, "completion_length": 1961.25, "epoch": 0.0368, "grad_norm": 0.46484375, "kl": 0.015804292517714202, "learning_rate": 1.4375e-05, "loss": 0.0006, "reward": 0.7151055857539177, "reward_std": 0.163209717720747, "rewards/wer_reward_func": 0.7151055857539177, "step": 23},
+ {"clip_ratio": 0.0, "completion_length": 1961.875, "epoch": 0.0384, "grad_norm": 1.0859375, "kl": 0.016097142128273845, "learning_rate": 1.5000000000000002e-05, "loss": 0.0006, "reward": 0.6770479343831539, "reward_std": 0.16051876917481422, "rewards/wer_reward_func": 0.6770479343831539, "step": 24},
+ {"clip_ratio": 0.0, "completion_length": 1959.875, "epoch": 0.04, "grad_norm": 0.65625, "kl": 0.01587597408797592, "learning_rate": 1.5625e-05, "loss": 0.0006, "reward": 0.7550022006034851, "reward_std": 0.16675387474242598, "rewards/wer_reward_func": 0.7550022006034851, "step": 25},
+ {"clip_ratio": 0.0, "completion_length": 1959.625, "epoch": 0.0416, "grad_norm": 0.91796875, "kl": 0.0167808651458472, "learning_rate": 1.6250000000000002e-05, "loss": 0.0007, "reward": 0.6954782530665398, "reward_std": 0.202576438896358, "rewards/wer_reward_func": 0.6954782530665398, "step": 26},
+ {"clip_ratio": 0.0, "completion_length": 1950.875, "epoch": 0.0432, "grad_norm": 0.6171875, "kl": 0.01657955057453364, "learning_rate": 1.6875e-05, "loss": 0.0007, "reward": 0.7669722959399223, "reward_std": 0.12058468838222325, "rewards/wer_reward_func": 0.7669722959399223, "step": 27},
+ {"clip_ratio": 0.0, "completion_length": 1956.375, "epoch": 0.0448, "grad_norm": 0.54296875, "kl": 0.016129926429130137, "learning_rate": 1.7500000000000002e-05, "loss": 0.0006, "reward": 0.7313148304820061, "reward_std": 0.15226054703816772, "rewards/wer_reward_func": 0.7313148304820061, "step": 28},
+ {"clip_ratio": 0.0, "completion_length": 1969.875, "epoch": 0.0464, "grad_norm": 1.1171875, "kl": 0.01826122379861772, "learning_rate": 1.8125e-05, "loss": 0.0007, "reward": 0.704025074839592, "reward_std": 0.19533006672281772, "rewards/wer_reward_func": 0.704025074839592, "step": 29},
+ {"clip_ratio": 0.0, "completion_length": 1973.125, "epoch": 0.048, "grad_norm": 0.90625, "kl": 0.01747321046423167, "learning_rate": 1.8750000000000002e-05, "loss": 0.0007, "reward": 0.7380140870809555, "reward_std": 0.18625171668827534, "rewards/wer_reward_func": 0.7380140870809555, "step": 30},
+ {"clip_ratio": 0.0, "completion_length": 1952.75, "epoch": 0.0496, "grad_norm": 0.859375, "kl": 0.017565080081112683, "learning_rate": 1.9375e-05, "loss": 0.0007, "reward": 0.6647357568144798, "reward_std": 0.1767052042996511, "rewards/wer_reward_func": 0.6647357568144798, "step": 31},
+ {"clip_ratio": 0.0, "completion_length": 1932.125, "epoch": 0.0512, "grad_norm": 0.50390625, "kl": 0.01821358152665198, "learning_rate": 2e-05, "loss": 0.0007, "reward": 0.7670070715248585, "reward_std": 0.17231934261508286, "rewards/wer_reward_func": 0.7670070715248585, "step": 32},
+ {"clip_ratio": 0.0, "completion_length": 1955.375, "epoch": 0.0528, "grad_norm": 0.369140625, "kl": 0.017620138358324766, "learning_rate": 1.9999859667149386e-05, "loss": 0.0007, "reward": 0.6896664723753929, "reward_std": 0.080823797325138, "rewards/wer_reward_func": 0.6896664723753929, "step": 33},
+ {"clip_ratio": 0.0, "completion_length": 1949.0, "epoch": 0.0544, "grad_norm": 0.58984375, "kl": 0.019393826834857464, "learning_rate": 1.9999438672536202e-05, "loss": 0.0008, "reward": 0.7308423668146133, "reward_std": 0.22626487538218498, "rewards/wer_reward_func": 0.7308423668146133, "step": 34},
+ {"clip_ratio": 0.0, "completion_length": 1959.25, "epoch": 0.056, "grad_norm": 0.65234375, "kl": 0.018983268411830068, "learning_rate": 1.9998737027976323e-05, "loss": 0.0008, "reward": 0.7887512892484665, "reward_std": 0.11498277448117733, "rewards/wer_reward_func": 0.7887512892484665, "step": 35},
+ {"clip_ratio": 0.0, "completion_length": 1960.375, "epoch": 0.0576, "grad_norm": 0.69140625, "kl": 0.0207195149268955, "learning_rate": 1.99977547531625e-05, "loss": 0.0008, "reward": 0.748592272400856, "reward_std": 0.13713865308091044, "rewards/wer_reward_func": 0.748592272400856, "step": 36},
+ {"clip_ratio": 0.0, "completion_length": 1935.125, "epoch": 0.0592, "grad_norm": 0.84765625, "kl": 0.018717350903898478, "learning_rate": 1.9996491875663833e-05, "loss": 0.0007, "reward": 0.796571895480156, "reward_std": 0.07994944072561339, "rewards/wer_reward_func": 0.796571895480156, "step": 37},
+ {"clip_ratio": 0.0, "completion_length": 1955.375, "epoch": 0.0608, "grad_norm": 0.50390625, "kl": 0.019668580498546362, "learning_rate": 1.9994948430924944e-05, "loss": 0.0008, "reward": 0.6918950304389, "reward_std": 0.17351699527353048, "rewards/wer_reward_func": 0.6918950304389, "step": 38},
+ {"clip_ratio": 0.0, "completion_length": 1959.5, "epoch": 0.0624, "grad_norm": 0.58203125, "kl": 0.02082771761342883, "learning_rate": 1.9993124462265045e-05, "loss": 0.0008, "reward": 0.771839089691639, "reward_std": 0.15878148435149342, "rewards/wer_reward_func": 0.771839089691639, "step": 39},
+ {"clip_ratio": 0.0, "completion_length": 1926.875, "epoch": 0.064, "grad_norm": 0.609375, "kl": 0.023743279045447707, "learning_rate": 1.9991020020876676e-05, "loss": 0.0009, "reward": 0.7313846871256828, "reward_std": 0.1756739574484527, "rewards/wer_reward_func": 0.7313846871256828, "step": 40},
+ {"clip_ratio": 0.0, "completion_length": 1951.5, "epoch": 0.0656, "grad_norm": 1.0078125, "kl": 0.024232700234279037, "learning_rate": 1.9988635165824293e-05, "loss": 0.001, "reward": 0.8415787816047668, "reward_std": 0.08054527395870537, "rewards/wer_reward_func": 0.8415787816047668, "step": 41},
+ {"clip_ratio": 0.0, "completion_length": 1967.875, "epoch": 0.0672, "grad_norm": 1.1796875, "kl": 0.024361844873055816, "learning_rate": 1.998596996404259e-05, "loss": 0.001, "reward": 0.7182202935218811, "reward_std": 0.16461236914619803, "rewards/wer_reward_func": 0.7182202935218811, "step": 42},
+ {"clip_ratio": 0.0, "completion_length": 1964.625, "epoch": 0.0688, "grad_norm": 1.2734375, "kl": 0.025085279252380133, "learning_rate": 1.9983024490334645e-05, "loss": 0.001, "reward": 0.697236530482769, "reward_std": 0.1966134626418352, "rewards/wer_reward_func": 0.697236530482769, "step": 43},
+ {"clip_ratio": 0.0, "completion_length": 1943.625, "epoch": 0.0704, "grad_norm": 0.84765625, "kl": 0.024320798460394144, "learning_rate": 1.99797988273698e-05, "loss": 0.001, "reward": 0.6831536628305912, "reward_std": 0.2085254923440516, "rewards/wer_reward_func": 0.6831536628305912, "step": 44},
+ {"clip_ratio": 0.0, "completion_length": 1952.0, "epoch": 0.072, "grad_norm": 0.54296875, "kl": 0.030228571966290474, "learning_rate": 1.9976293065681355e-05, "loss": 0.0012, "reward": 0.8000416941940784, "reward_std": 0.15345202130265534, "rewards/wer_reward_func": 0.8000416941940784, "step": 45},
+ {"clip_ratio": 0.0, "completion_length": 1948.0, "epoch": 0.0736, "grad_norm": 1.1171875, "kl": 0.02587601402774453, "learning_rate": 1.997250730366401e-05, "loss": 0.001, "reward": 0.6185710355639458, "reward_std": 0.21584839094430208, "rewards/wer_reward_func": 0.6185710355639458, "step": 46},
+ {"clip_ratio": 0.0, "completion_length": 1952.5, "epoch": 0.0752, "grad_norm": 0.486328125, "kl": 0.026088092010468245, "learning_rate": 1.9968441647571124e-05, "loss": 0.001, "reward": 0.7114294916391373, "reward_std": 0.15749832591973245, "rewards/wer_reward_func": 0.7114294916391373, "step": 47},
+ {"clip_ratio": 0.0, "completion_length": 1955.375, "epoch": 0.0768, "grad_norm": 0.78125, "kl": 0.04521624161861837, "learning_rate": 1.996409621151172e-05, "loss": 0.0018, "reward": 0.7588807716965675, "reward_std": 0.15750665869563818, "rewards/wer_reward_func": 0.7588807716965675, "step": 48},
+ {"clip_ratio": 0.0, "completion_length": 1955.875, "epoch": 0.0784, "grad_norm": 0.984375, "kl": 0.035593433072790504, "learning_rate": 1.995947111744728e-05, "loss": 0.0014, "reward": 0.6849986612796783, "reward_std": 0.1458123391494155, "rewards/wer_reward_func": 0.6849986612796783, "step": 49},
+ {"clip_ratio": 0.0, "completion_length": 1944.625, "epoch": 0.08, "grad_norm": 1.4453125, "kl": 0.052820508601143956, "learning_rate": 1.9954566495188333e-05, "loss": 0.0021, "reward": 0.7492151334881783, "reward_std": 0.17276689689606428, "rewards/wer_reward_func": 0.7492151334881783, "step": 50},
+ {"clip_ratio": 0.0, "completion_length": 1965.25, "epoch": 0.0816, "grad_norm": 1.0, "kl": 0.06656047655269504, "learning_rate": 1.9949382482390803e-05, "loss": 0.0027, "reward": 0.7457267493009567, "reward_std": 0.1444834356661886, "rewards/wer_reward_func": 0.7457267493009567, "step": 51},
+ {"clip_ratio": 0.0, "completion_length": 1965.375, "epoch": 0.0832, "grad_norm": 0.75, "kl": 0.07145386259071529, "learning_rate": 1.9943919224552154e-05, "loss": 0.0029, "reward": 0.8046427965164185, "reward_std": 0.1403827196918428, "rewards/wer_reward_func": 0.8046427965164185, "step": 52},
+ {"clip_ratio": 0.0, "completion_length": 1949.625, "epoch": 0.0848, "grad_norm": 0.6953125, "kl": 0.03203753801062703, "learning_rate": 1.9938176875007284e-05, "loss": 0.0013, "reward": 0.6930092461407185, "reward_std": 0.16305453283712268, "rewards/wer_reward_func": 0.6930092461407185, "step": 53},
+ {"clip_ratio": 0.0, "completion_length": 1951.875, "epoch": 0.0864, "grad_norm": 0.640625, "kl": 0.10621223249472678, "learning_rate": 1.993215559492426e-05, "loss": 0.0042, "reward": 0.8163558691740036, "reward_std": 0.1400533178821206, "rewards/wer_reward_func": 0.8163558691740036, "step": 54},
+ {"clip_ratio": 0.0, "completion_length": 1954.625, "epoch": 0.088, "grad_norm": 0.83203125, "kl": 0.043898894684389234, "learning_rate": 1.9925855553299755e-05, "loss": 0.0018, "reward": 0.6846126243472099, "reward_std": 0.15134379279334098, "rewards/wer_reward_func": 0.6846126243472099, "step": 55},
+ {"clip_ratio": 0.0, "completion_length": 1964.375, "epoch": 0.0896, "grad_norm": 0.40625, "kl": 0.1627205451950431, "learning_rate": 1.991927692695433e-05, "loss": 0.0065, "reward": 0.7908694818615913, "reward_std": 0.18554856907576323, "rewards/wer_reward_func": 0.7908694818615913, "step": 56},
+ {"clip_ratio": 0.0, "completion_length": 1956.0, "epoch": 0.0912, "grad_norm": 1.5859375, "kl": 0.11634801258333027, "learning_rate": 1.9912419900527467e-05, "loss": 0.0047, "reward": 0.7787131145596504, "reward_std": 0.1848770366050303, "rewards/wer_reward_func": 0.7787131145596504, "step": 57},
+ {"clip_ratio": 0.0, "completion_length": 1957.25, "epoch": 0.0928, "grad_norm": 0.61328125, "kl": 0.046286205062642694, "learning_rate": 1.9905284666472374e-05, "loss": 0.0019, "reward": 0.7016812488436699, "reward_std": 0.1899972972460091, "rewards/wer_reward_func": 0.7016812488436699, "step": 58},
+ {"clip_ratio": 0.0, "completion_length": 1955.125, "epoch": 0.0944, "grad_norm": 2.109375, "kl": 0.175645818002522, "learning_rate": 1.9897871425050598e-05, "loss": 0.007, "reward": 0.8039649501442909, "reward_std": 0.1966583700850606, "rewards/wer_reward_func": 0.8039649501442909, "step": 59},
+ {"clip_ratio": 0.0, "completion_length": 1956.75, "epoch": 0.096, "grad_norm": 1.046875, "kl": 0.23953252588398755, "learning_rate": 1.9890180384326404e-05, "loss": 0.0096, "reward": 0.7845720686018467, "reward_std": 0.16813895315863192, "rewards/wer_reward_func": 0.7845720686018467, "step": 60},
+ {"clip_ratio": 0.0, "completion_length": 1948.375, "epoch": 0.0976, "grad_norm": 1.109375, "kl": 0.054209856782108545, "learning_rate": 1.9882211760160924e-05, "loss": 0.0022, "reward": 0.7602999731898308, "reward_std": 0.20474626123905182, "rewards/wer_reward_func": 0.7602999731898308, "step": 61},
+ {"clip_ratio": 0.0, "completion_length": 1944.0, "epoch": 0.0992, "grad_norm": 0.53515625, "kl": 0.05603330465964973, "learning_rate": 1.9873965776206103e-05, "loss": 0.0022, "reward": 0.7994262725114822, "reward_std": 0.16637277812696993, "rewards/wer_reward_func": 0.7994262725114822, "step": 62},
+ {"clip_ratio": 0.0, "completion_length": 1967.125, "epoch": 0.1008, "grad_norm": 0.94140625, "kl": 0.3190964236855507, "learning_rate": 1.986544266389843e-05, "loss": 0.0128, "reward": 0.8898109868168831, "reward_std": 0.0970578242558986, "rewards/wer_reward_func": 0.8898109868168831, "step": 63},
+ {"clip_ratio": 0.0, "completion_length": 1949.125, "epoch": 0.1024, "grad_norm": 1.34375, "kl": 0.11805844190530479, "learning_rate": 1.9856642662452437e-05, "loss": 0.0047, "reward": 0.7181491330265999, "reward_std": 0.13410304160788655, "rewards/wer_reward_func": 0.7181491330265999, "step": 64},
+ {"clip_ratio": 0.0, "completion_length": 1924.375, "epoch": 0.104, "grad_norm": 0.51171875, "kl": 0.030335136456415057, "learning_rate": 1.984756601885398e-05, "loss": 0.0012, "reward": 0.7165433652698994, "reward_std": 0.1613742959452793, "rewards/wer_reward_func": 0.7165433652698994, "step": 65},
+ {"clip_ratio": 0.0, "completion_length": 1974.125, "epoch": 0.1056, "grad_norm": 0.65234375, "kl": 0.1792802654672414, "learning_rate": 1.9838212987853312e-05, "loss": 0.0072, "reward": 0.6876808255910873, "reward_std": 0.20414888858795166, "rewards/wer_reward_func": 0.6876808255910873, "step": 66},
+ {"clip_ratio": 0.0, "completion_length": 1944.125, "epoch": 0.1072, "grad_norm": 0.6953125, "kl": 0.2141565252095461, "learning_rate": 1.9828583831957935e-05, "loss": 0.0086, "reward": 0.7715155333280563, "reward_std": 0.15179554466158152, "rewards/wer_reward_func": 0.7715155333280563, "step": 67},
+ {"clip_ratio": 0.0, "completion_length": 1944.0, "epoch": 0.1088, "grad_norm": 0.82421875, "kl": 0.20513050560839474, "learning_rate": 1.9818678821425227e-05, "loss": 0.0082, "reward": 0.753034420311451, "reward_std": 0.14856206998229027, "rewards/wer_reward_func": 0.753034420311451, "step": 68},
+ {"clip_ratio": 0.0, "completion_length": 1966.375, "epoch": 0.1104, "grad_norm": 0.9921875, "kl": 0.4696819600649178, "learning_rate": 1.980849823425486e-05, "loss": 0.0188, "reward": 0.8376018479466438, "reward_std": 0.13525405304972082, "rewards/wer_reward_func": 0.8376018479466438, "step": 69},
+ {"clip_ratio": 0.0, "completion_length": 1926.0, "epoch": 0.112, "grad_norm": 0.91796875, "kl": 0.2035030140541494, "learning_rate": 1.9798042356181e-05, "loss": 0.0081, "reward": 0.7663351036608219, "reward_std": 0.20173717802390456, "rewards/wer_reward_func": 0.7663351036608219, "step": 70},
+ {"clip_ratio": 0.0, "completion_length": 1957.0, "epoch": 0.1136, "grad_norm": 0.66015625, "kl": 0.5680657427292317, "learning_rate": 1.978731148066428e-05, "loss": 0.0227, "reward": 0.8518649414181709, "reward_std": 0.14899396104738116, "rewards/wer_reward_func": 0.8518649414181709, "step": 71},
+ {"clip_ratio": 0.0, "completion_length": 1943.125, "epoch": 0.1152, "grad_norm": 0.486328125, "kl": 0.044534852262586355, "learning_rate": 1.977630590888357e-05, "loss": 0.0018, "reward": 0.7693819999694824, "reward_std": 0.17239460709970444, "rewards/wer_reward_func": 0.7693819999694824, "step": 72},
+ {"clip_ratio": 0.0, "completion_length": 1949.25, "epoch": 0.1168, "grad_norm": 0.58984375, "kl": 0.32066047424450517, "learning_rate": 1.9765025949727526e-05, "loss": 0.0128, "reward": 0.9195869937539101, "reward_std": 0.09115095145534724, "rewards/wer_reward_func": 0.9195869937539101, "step": 73},
+ {"clip_ratio": 0.0, "completion_length": 1947.5, "epoch": 0.1184, "grad_norm": 0.57421875, "kl": 0.22461501089856029, "learning_rate": 1.975347191978591e-05, "loss": 0.009, "reward": 0.7567244507372379, "reward_std": 0.1430160580202937, "rewards/wer_reward_func": 0.7567244507372379
971
+ "step": 74
972
+ },
973
+ {
974
+ "clip_ratio": 0.0,
975
+ "completion_length": 1960.125,
976
+ "epoch": 0.12,
977
+ "grad_norm": 0.75390625,
978
+ "kl": 0.13243340514600277,
979
+ "learning_rate": 1.9741644143340707e-05,
980
+ "loss": 0.0053,
981
+ "reward": 0.7881599441170692,
982
+ "reward_std": 0.13415389298461378,
983
+ "rewards/wer_reward_func": 0.7881599441170692,
984
+ "step": 75
985
+ },
986
+ {
987
+ "clip_ratio": 0.0,
988
+ "completion_length": 1941.625,
989
+ "epoch": 0.1216,
990
+ "grad_norm": 0.90625,
991
+ "kl": 0.1920600552111864,
992
+ "learning_rate": 1.9729542952357045e-05,
993
+ "loss": 0.0077,
994
+ "reward": 0.8567590862512589,
995
+ "reward_std": 0.13653477816842496,
996
+ "rewards/wer_reward_func": 0.8567590862512589,
997
+ "step": 76
998
+ },
999
+ {
1000
+ "clip_ratio": 0.0,
1001
+ "completion_length": 1953.375,
1002
+ "epoch": 0.1232,
1003
+ "grad_norm": 0.466796875,
1004
+ "kl": 0.34102778718806803,
1005
+ "learning_rate": 1.9717168686473845e-05,
1006
+ "loss": 0.0136,
1007
+ "reward": 0.8028187304735184,
1008
+ "reward_std": 0.17851749807596207,
1009
+ "rewards/wer_reward_func": 0.8028187304735184,
1010
+ "step": 77
1011
+ },
1012
+ {
1013
+ "clip_ratio": 0.0,
1014
+ "completion_length": 1958.625,
1015
+ "epoch": 0.1248,
1016
+ "grad_norm": 0.56640625,
1017
+ "kl": 0.27146758884191513,
1018
+ "learning_rate": 1.9704521692994305e-05,
1019
+ "loss": 0.0109,
1020
+ "reward": 0.8368449658155441,
1021
+ "reward_std": 0.10253577493131161,
1022
+ "rewards/wer_reward_func": 0.8368449658155441,
1023
+ "step": 78
1024
+ },
1025
+ {
1026
+ "clip_ratio": 0.0,
1027
+ "completion_length": 1948.25,
1028
+ "epoch": 0.1264,
1029
+ "grad_norm": 0.671875,
1030
+ "kl": 0.31286598835140467,
1031
+ "learning_rate": 1.969160232687616e-05,
1032
+ "loss": 0.0125,
1033
+ "reward": 0.735517330467701,
1034
+ "reward_std": 0.24261212535202503,
1035
+ "rewards/wer_reward_func": 0.735517330467701,
1036
+ "step": 79
1037
+ },
1038
+ {
1039
+ "clip_ratio": 0.0,
1040
+ "completion_length": 1977.125,
1041
+ "epoch": 0.128,
1042
+ "grad_norm": 1.2734375,
1043
+ "kl": 0.4811452552676201,
1044
+ "learning_rate": 1.96784109507217e-05,
1045
+ "loss": 0.0192,
1046
+ "reward": 0.7436019517481327,
1047
+ "reward_std": 0.19373487099073827,
1048
+ "rewards/wer_reward_func": 0.7436019517481327,
1049
+ "step": 80
1050
+ },
1051
+ {
1052
+ "clip_ratio": 0.0,
1053
+ "completion_length": 1946.5,
1054
+ "epoch": 0.1296,
1055
+ "grad_norm": 1.5703125,
1056
+ "kl": 0.31637065787799656,
1057
+ "learning_rate": 1.9664947934767614e-05,
1058
+ "loss": 0.0127,
1059
+ "reward": 0.8187097907066345,
1060
+ "reward_std": 0.1552155721001327,
1061
+ "rewards/wer_reward_func": 0.8187097907066345,
1062
+ "step": 81
1063
+ },
1064
+ {
1065
+ "clip_ratio": 0.0,
1066
+ "completion_length": 1942.125,
1067
+ "epoch": 0.1312,
1068
+ "grad_norm": 0.5625,
1069
+ "kl": 0.16939618973992765,
1070
+ "learning_rate": 1.965121365687458e-05,
1071
+ "loss": 0.0068,
1072
+ "reward": 0.7827628552913666,
1073
+ "reward_std": 0.15352375875227153,
1074
+ "rewards/wer_reward_func": 0.7827628552913666,
1075
+ "step": 82
1076
+ },
1077
+ {
1078
+ "clip_ratio": 0.0,
1079
+ "completion_length": 1950.25,
1080
+ "epoch": 0.1328,
1081
+ "grad_norm": 0.73046875,
1082
+ "kl": 0.37901579844765365,
1083
+ "learning_rate": 1.9637208502516673e-05,
1084
+ "loss": 0.0152,
1085
+ "reward": 0.732394628226757,
1086
+ "reward_std": 0.14362181909382343,
1087
+ "rewards/wer_reward_func": 0.732394628226757,
1088
+ "step": 83
1089
+ },
1090
+ {
1091
+ "clip_ratio": 0.0,
1092
+ "completion_length": 1952.125,
1093
+ "epoch": 0.1344,
1094
+ "grad_norm": 0.61328125,
1095
+ "kl": 0.2999027846381068,
1096
+ "learning_rate": 1.9622932864770538e-05,
1097
+ "loss": 0.012,
1098
+ "reward": 0.8153776079416275,
1099
+ "reward_std": 0.09595204587094486,
1100
+ "rewards/wer_reward_func": 0.8153776079416275,
1101
+ "step": 84
1102
+ },
1103
+ {
1104
+ "clip_ratio": 0.0,
1105
+ "completion_length": 1954.375,
1106
+ "epoch": 0.136,
1107
+ "grad_norm": 0.58984375,
1108
+ "kl": 0.08584306994453073,
1109
+ "learning_rate": 1.9608387144304363e-05,
1110
+ "loss": 0.0034,
1111
+ "reward": 0.7668257392942905,
1112
+ "reward_std": 0.13263899134472013,
1113
+ "rewards/wer_reward_func": 0.7668257392942905,
1114
+ "step": 85
1115
+ },
1116
+ {
1117
+ "clip_ratio": 0.0,
1118
+ "completion_length": 1943.75,
1119
+ "epoch": 0.1376,
1120
+ "grad_norm": 0.640625,
1121
+ "kl": 0.12226048181764781,
1122
+ "learning_rate": 1.959357174936663e-05,
1123
+ "loss": 0.0049,
1124
+ "reward": 0.7079343125224113,
1125
+ "reward_std": 0.17419014684855938,
1126
+ "rewards/wer_reward_func": 0.7079343125224113,
1127
+ "step": 86
1128
+ },
1129
+ {
1130
+ "clip_ratio": 0.0,
1131
+ "completion_length": 1943.375,
1132
+ "epoch": 0.1392,
1133
+ "grad_norm": 1.078125,
1134
+ "kl": 0.28691139654256403,
1135
+ "learning_rate": 1.9578487095774666e-05,
1136
+ "loss": 0.0115,
1137
+ "reward": 0.7225043401122093,
1138
+ "reward_std": 0.23464243719354272,
1139
+ "rewards/wer_reward_func": 0.7225043401122093,
1140
+ "step": 87
1141
+ },
1142
+ {
1143
+ "clip_ratio": 0.0,
1144
+ "completion_length": 1937.625,
1145
+ "epoch": 0.1408,
1146
+ "grad_norm": 0.6484375,
1147
+ "kl": 0.16382792522199452,
1148
+ "learning_rate": 1.956313360690295e-05,
1149
+ "loss": 0.0066,
1150
+ "reward": 0.8292155712842941,
1151
+ "reward_std": 0.1263326636981219,
1152
+ "rewards/wer_reward_func": 0.8292155712842941,
1153
+ "step": 88
1154
+ },
1155
+ {
1156
+ "clip_ratio": 0.0,
1157
+ "completion_length": 1960.875,
1158
+ "epoch": 0.1424,
1159
+ "grad_norm": 0.50390625,
1160
+ "kl": 0.18305454682558775,
1161
+ "learning_rate": 1.9547511713671264e-05,
1162
+ "loss": 0.0073,
1163
+ "reward": 0.7428606450557709,
1164
+ "reward_std": 0.1822054407093674,
1165
+ "rewards/wer_reward_func": 0.7428606450557709,
1166
+ "step": 89
1167
+ },
1168
+ {
1169
+ "clip_ratio": 0.0,
1170
+ "completion_length": 1950.125,
1171
+ "epoch": 0.144,
1172
+ "grad_norm": 0.54296875,
1173
+ "kl": 0.2168082855641842,
1174
+ "learning_rate": 1.9531621854532562e-05,
1175
+ "loss": 0.0087,
1176
+ "reward": 0.8217585235834122,
1177
+ "reward_std": 0.13432517577894032,
1178
+ "rewards/wer_reward_func": 0.8217585235834122,
1179
+ "step": 90
1180
+ },
1181
+ {
1182
+ "clip_ratio": 0.0,
1183
+ "completion_length": 1932.5,
1184
+ "epoch": 0.1456,
1185
+ "grad_norm": 0.61328125,
1186
+ "kl": 0.41217829566448927,
1187
+ "learning_rate": 1.9515464475460692e-05,
1188
+ "loss": 0.0165,
1189
+ "reward": 0.8024833723902702,
1190
+ "reward_std": 0.16875174804590642,
1191
+ "rewards/wer_reward_func": 0.8024833723902702,
1192
+ "step": 91
1193
+ },
1194
+ {
1195
+ "clip_ratio": 0.0,
1196
+ "completion_length": 1951.25,
1197
+ "epoch": 0.1472,
1198
+ "grad_norm": 0.76171875,
1199
+ "kl": 0.2883884224575013,
1200
+ "learning_rate": 1.949904002993787e-05,
1201
+ "loss": 0.0115,
1202
+ "reward": 0.847618579864502,
1203
+ "reward_std": 0.12144165020436049,
1204
+ "rewards/wer_reward_func": 0.847618579864502,
1205
+ "step": 92
1206
+ },
1207
+ {
1208
+ "clip_ratio": 0.0,
1209
+ "completion_length": 1947.625,
1210
+ "epoch": 0.1488,
1211
+ "grad_norm": 0.80859375,
1212
+ "kl": 0.48620657715946436,
1213
+ "learning_rate": 1.9482348978941947e-05,
1214
+ "loss": 0.0194,
1215
+ "reward": 0.8414455056190491,
1216
+ "reward_std": 0.0853516417555511,
1217
+ "rewards/wer_reward_func": 0.8414455056190491,
1218
+ "step": 93
1219
+ },
1220
+ {
1221
+ "clip_ratio": 0.0,
1222
+ "completion_length": 1949.75,
1223
+ "epoch": 0.1504,
1224
+ "grad_norm": 0.77734375,
1225
+ "kl": 0.1605790094472468,
1226
+ "learning_rate": 1.946539179093347e-05,
1227
+ "loss": 0.0064,
1228
+ "reward": 0.753880750387907,
1229
+ "reward_std": 0.1360992002300918,
1230
+ "rewards/wer_reward_func": 0.753880750387907,
1231
+ "step": 94
1232
+ },
1233
+ {
1234
+ "clip_ratio": 0.0,
1235
+ "completion_length": 1947.625,
1236
+ "epoch": 0.152,
1237
+ "grad_norm": 0.859375,
1238
+ "kl": 0.1815213665831834,
1239
+ "learning_rate": 1.944816894184255e-05,
1240
+ "loss": 0.0073,
1241
+ "reward": 0.7713245637714863,
1242
+ "reward_std": 0.16424694866873324,
1243
+ "rewards/wer_reward_func": 0.7713245637714863,
1244
+ "step": 95
1245
+ },
1246
+ {
1247
+ "clip_ratio": 0.0,
1248
+ "completion_length": 1955.125,
1249
+ "epoch": 0.1536,
1250
+ "grad_norm": 0.66796875,
1251
+ "kl": 0.45056375954300165,
1252
+ "learning_rate": 1.9430680915055492e-05,
1253
+ "loss": 0.018,
1254
+ "reward": 0.8012058958411217,
1255
+ "reward_std": 0.1394583999644965,
1256
+ "rewards/wer_reward_func": 0.8012058958411217,
1257
+ "step": 96
1258
+ },
1259
+ {
1260
+ "clip_ratio": 0.0,
1261
+ "completion_length": 1935.0,
1262
+ "epoch": 0.1552,
1263
+ "grad_norm": 1.0234375,
1264
+ "kl": 0.424512492492795,
1265
+ "learning_rate": 1.941292820140122e-05,
1266
+ "loss": 0.017,
1267
+ "reward": 0.7339756079018116,
1268
+ "reward_std": 0.22004147712141275,
1269
+ "rewards/wer_reward_func": 0.7339756079018116,
1270
+ "step": 97
1271
+ },
1272
+ {
1273
+ "clip_ratio": 0.0,
1274
+ "completion_length": 1952.875,
1275
+ "epoch": 0.1568,
1276
+ "grad_norm": 0.89453125,
1277
+ "kl": 0.26004820922389627,
1278
+ "learning_rate": 1.9394911299137522e-05,
1279
+ "loss": 0.0104,
1280
+ "reward": 0.8070669919252396,
1281
+ "reward_std": 0.2013562674401328,
1282
+ "rewards/wer_reward_func": 0.8070669919252396,
1283
+ "step": 98
1284
+ },
1285
+ {
1286
+ "clip_ratio": 0.0,
1287
+ "completion_length": 1972.75,
1288
+ "epoch": 0.1584,
1289
+ "grad_norm": 1.0234375,
1290
+ "kl": 0.6008601551875472,
1291
+ "learning_rate": 1.9376630713937043e-05,
1292
+ "loss": 0.024,
1293
+ "reward": 0.8224332630634308,
1294
+ "reward_std": 0.1132977275410667,
1295
+ "rewards/wer_reward_func": 0.8224332630634308,
1296
+ "step": 99
1297
+ },
1298
+ {
1299
+ "clip_ratio": 0.0,
1300
+ "completion_length": 1930.75,
1301
+ "epoch": 0.16,
1302
+ "grad_norm": 1.015625,
1303
+ "kl": 0.45082843746058643,
1304
+ "learning_rate": 1.9358086958873116e-05,
1305
+ "loss": 0.018,
1306
+ "reward": 0.8571085333824158,
1307
+ "reward_std": 0.10337967309169471,
1308
+ "rewards/wer_reward_func": 0.8571085333824158,
1309
+ "step": 100
1310
+ }
1311
+ ],
1312
+ "logging_steps": 1,
1313
+ "max_steps": 625,
1314
+ "num_input_tokens_seen": 0,
1315
+ "num_train_epochs": 1,
1316
+ "save_steps": 100,
1317
+ "stateful_callbacks": {
1318
+ "TrainerControl": {
1319
+ "args": {
1320
+ "should_epoch_stop": false,
1321
+ "should_evaluate": false,
1322
+ "should_log": false,
1323
+ "should_save": true,
1324
+ "should_training_stop": false
1325
+ },
1326
+ "attributes": {}
1327
+ }
1328
+ },
1329
+ "total_flos": 0.0,
1330
+ "train_batch_size": 4,
1331
+ "trial_name": null,
1332
+ "trial_params": null
1333
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff