



May 2023, Xiamen

About Me

  • Hello, I'm Wei Liu (刘维). Here are my Email, GitHub, and Google Scholar.
    • 2014-2018: Bachelor's degree in Communication Engineering at BUPT
    • 2018-2021: Master's degree in Computer Engineering at the CIST Lab, BUPT
    • 2021-2023: NLP Researcher at Tencent
    • 2023.8-present: Research Assistant (RA) at THUNLP, supervised by Prof. Zhiyuan Liu
    • I am currently recruiting research interns! See details here

Research Interests

  • Natural Language Generation, especially compressing and summarizing language.
  • Developing robust, safe, and efficient LLM multi-agent cooperation. This is still an early-stage exploration, and I am interested in several potential research directions, including:
    • Planning, memory, reasoning, and scaling in LLM multi-agent systems
    • Agents as proxies for humans
    • Safe communication among agents
    • Using multi-agent trajectories to improve LLMs

Industrial Experience

  • At Tencent, I worked on improving the performance of News Feed Recommendation and Advertising.
    • Improved the NLU capability of News Feed Recommendation through more accurate and controllable keyphrase prediction.
    • Introduced non-commercial behaviors into advertising models via graph modeling.
    • Modeled sequential behaviors and applied end-to-end feature quantization for stable feature engineering in Advertising.
    • Modeled diverse user interests with a diffusion-model approach.
    • Achieved a better efficiency/quality trade-off between single-tower and two-tower models in the recall/pre-rank stages of Advertising.

Publications

  • Multi-Agent Systems Powered by LLMs:
    • [paper] [code] Communicative Agents for Software Development
    • [paper] [code] Experiential Co-Learning of Software-Developing Agents
  • More Accurate and Controllable Keyphrase Prediction:
    • [paper] [code] UniKeyphrase: A Unified Extraction and Generation Framework for Keyphrase Prediction. ACL 2021 Findings
    • [paper] [code] Fast and Constrained Absent Keyphrase Generation by Prompt-Based Learning. AAAI 2022
  • More Comprehensive and Factual Summarization:
    • [paper] [code] In Conclusion Not Repetition: Comprehensive Abstractive Summarization with Diversified Attention Based on Determinantal Point Processes. CoNLL 2021
    • [paper] [code] Subjective Bias in Abstractive Summarization. arXiv
    • [paper] [code] CO2Sum: Contrastive Learning for Factual-Consistent Abstractive Summarization. arXiv
    • [paper] A Multi-View Abstractive Summarization Model Jointly Considering Semantics and Sentiment. CCIS 2018
    • [paper] CIST@CLSciSumm-19: Automatic Scientific Paper Summarization with Citances and Facets. SIGIR 2019 shared task
    • [paper] [code] Multi-lingual Wikipedia Summarization and Title Generation On Low Resource Corpus. RANLP 2019 shared task
    • [paper] CIST@CL-SciSumm 2020, LongSumm 2020: Automatic Scientific Document Summarization. EMNLP 2020 shared task