TraceGen: World Modeling in 3D Trace-Space Enables Learning from Cross-Embodiment Videos
Recent Activity
- TraceVLA: Visual Trace Prompting Enhances Spatial-Temporal Awareness for Generalist Robotic Policies • Paper • 2412.10345 • Published
- furonghuang-lab/tracevla_phi3v • Text Generation
- furonghuang-lab/tracevla_7b • Text Generation • 8B
- furonghuang-lab/openvla_phi3v • Text Generation
- Easy2Hard-Bench: six datasets with continuous difficulty ratings, enabling profiling of LLM performance and generalization across difficulty levels.
- Benchmarking the Robustness of Image Watermarks (under development; data to be released soon).
- furonghuang-lab/PHTest • Dataset
- AutoDAN: Interpretable Gradient-Based Adversarial Attacks on Large Language Models • Paper • 2310.15140 • Published
- Automatic Pseudo-Harmful Prompt Generation for Evaluating False Refusals in Large Language Models • Paper • 2409.00598 • Published