Code Hallucination

arXiv:2407.04831v1
Abstract: Generative models such as large language models are widely used as code copilots and for whole-program generation. However, the programs they generate are often of questionable correctness, authenticity, and reliability: they may not follow the user's requirements, may produce incorrect or nonsensical outputs, or may contain semantic or syntactic errors, failure modes collectively known as LLM hallucination. In this work, we present several types of code hallucination, which we generated manually by prompting large language models. We also present HallTrigger, a technique for efficiently generating arbitrary code hallucinations. Our method leverages three dynamic attributes of LLMs to craft prompts that reliably trigger hallucinations, without requiring access to the model architecture or parameters. Results on popular black-box models suggest that HallTrigger is highly effective and that pervasive LLM hallucination has a significant impact on software development.
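
The abstract does not specify the three dynamic attributes HallTrigger exploits, so the following is only a minimal sketch of the black-box workflow it implies: craft a prompt, query a model without any access to its weights, and check the returned code for cheap-to-detect hallucination symptoms. Everything here (`query_model`, the prompt template, the two checks) is a hypothetical placeholder for illustration, not the paper's actual method.

```python
# Minimal sketch of a black-box hallucination-triggering loop (hypothetical,
# not the HallTrigger implementation). It checks generated Python code for
# two symptoms: syntax errors and imports of modules that do not exist.
import ast
import importlib.util
from typing import Callable

# Hypothetical prompt template that pressures the model toward unsupported claims.
PROMPT_TEMPLATE = (
    "Write a Python function using the module '{module}' to {task}. "
    "Return only code."
)

def extract_imports(source: str) -> set[str]:
    """Collect top-level module names imported by the generated code."""
    names: set[str] = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return names

def check_generation(source: str) -> list[str]:
    """Flag syntactic hallucinations and imports of nonexistent packages."""
    try:
        modules = extract_imports(source)
    except SyntaxError as exc:
        return [f"syntactic hallucination: {exc}"]
    return [
        f"package hallucination: module '{mod}' not found"
        for mod in modules
        if importlib.util.find_spec(mod) is None
    ]

def trigger(query_model: Callable[[str], str], module: str, task: str) -> list[str]:
    """Send one crafted prompt to a black-box model and report findings."""
    prompt = PROMPT_TEMPLATE.format(module=module, task=task)
    return check_generation(query_model(prompt))
```

As a usage illustration, a stub standing in for a real chat-completion API that returns code importing a made-up package would be flagged:

```python
def fake_model(prompt: str) -> str:
    # Stand-in for a real API call; returns a snippet importing a
    # nonexistent package, a common hallucination pattern.
    return "import turbojsonify\n\ndef parse(x):\n    return turbojsonify.loads(x)\n"

print(trigger(fake_model, module="turbojsonify", task="parse JSON"))
# -> ["package hallucination: module 'turbojsonify' not found"]
```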
