scikkk committed on
Commit 9e84660 · verified · 1 Parent(s): ee3e738

Update README.md

Files changed (1):
  1. README.md +1 -32
README.md CHANGED

@@ -51,30 +51,9 @@ We introduce MathCoder-VL, a series of open-source large multimodal models (LMMs
 
 
 ## Usage
-For training and inference code, please refer to [InternVL](https://github.com/OpenGVLab/InternVL).
+For training and inference code, please refer to [InternVL2-8B](https://huggingface.co/OpenGVLab/InternVL2-8B).
 
-```
-from datasets import load_dataset
-from PIL import Image
-from io import BytesIO
-
-mm_mathinstruct = load_dataset("MathLLMs/MM-MathInstruct")
-print(mm_mathinstruct)
-
-# show the last image
-img = Image.open(BytesIO(mm_mathinstruct['train'][-1]['image']))
-img.show()
-```
 
-It should print:
-```
-DatasetDict({
-    train: Dataset({
-        features: ['id', 'image', 'question', 'solution', 'image_path'],
-        num_rows: 2871988
-    })
-})
-```
 
 ### Prompt for TikZ Code Generation
 
@@ -109,17 +88,7 @@ Please provide the Python code needed to reproduce this image.\n<image>
 <img src="./examples/fig2.png" width="100%" title="Result Figure">
 </div>
 
-## Construction of MathCoder-VL
 
-<div align="center">
-<img src="./examples/fig4.png" width="100%" title="Result Figure">
-</div>
-
-## Performance
-
-<div align="center">
-<img src="./examples/tab1.png" width="100%" title="Result Figure">
-</div>
 
 ## **Citation**
 
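The snippet removed in this commit decodes each sample's raw `image` bytes with PIL (`Image.open(BytesIO(...))`). A minimal, runnable sketch of that decode step — using a locally generated PNG as a stand-in record, rather than downloading the full `MathLLMs/MM-MathInstruct` dataset:

```python
from io import BytesIO

from PIL import Image


def decode_image(raw: bytes) -> Image.Image:
    # The dataset stores each image as raw encoded bytes in its 'image' field;
    # wrapping them in BytesIO lets PIL decode them without touching disk.
    return Image.open(BytesIO(raw))


# Stand-in for one dataset record: a tiny PNG encoded to bytes locally.
buf = BytesIO()
Image.new("RGB", (4, 4), "white").save(buf, format="PNG")

img = decode_image(buf.getvalue())
print(img.size)  # (4, 4)
```

With the real dataset, `raw` would be `mm_mathinstruct['train'][-1]['image']`, exactly as in the removed snippet.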
 