About Me
- Hello, I'm Wei Liu (刘维). Here are my Email, GitHub, and Google Scholar. I am currently open to job opportunities, so feel free to contact me~
- 2014-2018: Bachelor's degree in Communication Engineering at BUPT
- 2018-2021: Master's degree in Computer Engineering at CIST Lab@BUPT
- 2021-2023: Application Researcher, Tencent
- 2023.8-present: Working at THUNLP with Prof. Zhiyuan Liu, focusing on LLM Multi-Agent Systems.
- See my work here.
- ChatDev, our LLM Multi-Agent system for software development, has reached #1 on GitHub Trending three times and has earned 25k 🌟!
- 2024.9: Our paper on connecting humans and agents in one system, iAgents, has been accepted to NeurIPS 2024! Try our online demo here.
Research Interests
- Natural Language Generation, especially language compression and summarization.
- Memorization and reasoning in LLMs.
- Developing robust, safe, efficient, and human-centric LLM Multi-Agent Systems.
- Served as a reviewer for ACL/ICLR/EMNLP/NeurIPS.
Industrial Experience
- At Tencent, I worked on improving the performance of News Feed Recommendation and Advertising.
- Improving NLU capabilities for News Feed Recommendation.
- Resolving the mismatch between commercial inclinations and content interests for WeChat Ads.
- Improving the stability, warm-up effects, and efficiency/quality trade-offs of the Tencent Ads Recommendation System.
- Diverse user interest modeling.
Publications
- Multi-Agent Systems powered by LLMs:
- paper code Autonomous Agents for Collaborative Task under Information Asymmetry. NeurIPS 2024
- paper code Communicative Agents for Software Development. ACL 2024
- paper code Experiential Co-Learning of Software-Developing Agents. ACL 2024
- paper code Iterative Experience Refinement of Software-Developing Agents. arXiv
- paper code Scaling Large-Language-Model-based Multi-Agent Collaboration. arXiv
- paper code Multi-Agent Software Development through Cross-Team Collaboration. arXiv
- More Accurate and Controllable Keyphrase Prediction:
- More Comprehensive and Factual Summarization:
- paper code In Conclusion Not Repetition: Comprehensive Abstractive Summarization with Diversified Attention Based on Determinantal Point Processes. CoNLL 2021
- paper code Subjective Bias in Abstractive Summarization. arXiv
- paper code CO2Sum: Contrastive Learning for Factual-Consistent Abstractive Summarization. arXiv
- paper A Multi-View Abstractive Summarization Model Jointly Considering Semantics and Sentiment. CCIS 2018
- paper CIST@CLSciSumm-19: Automatic Scientific Paper Summarization with Citances and Facets. SIGIR 2019 workshop
- paper code Multi-lingual Wikipedia Summarization and Title Generation On Low Resource Corpus. RANLP 2019 workshop
- paper CIST@CL-SciSumm 2020, LongSumm 2020: Automatic Scientific Document Summarization. EMNLP 2020 workshop