Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in TMLR, 2023
The work introduces Unified Instruction Tuning (UIT), a framework that standardizes instruction formats across datasets to boost model generalization.
Recommended citation: Liang, Shihao, et al. "Exploring format consistency for instruction tuning." arXiv preprint arXiv:2307.15504 (2023).
Download Paper
Published in ACL, 2024
A benchmark that evaluates the debugging capabilities of LLMs.
Recommended citation: Tian, Runchu, et al. "Debugbench: Evaluating debugging capability of large language models." arXiv preprint arXiv:2401.04621 (2024).
Download Paper
Published in ICLR, 2024
The work presents ToolLLM, a framework that improves open-source LLMs’ tool-use capabilities by creating ToolBench, enabling models like LLaMA to use APIs effectively and perform comparably to ChatGPT.
Recommended citation: Qin, Yujia, et al. "Toolllm: Facilitating large language models to master 16000+ real-world apis." arXiv preprint arXiv:2307.16789 (2023).
Download Paper
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different field in the type. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.