Code Hallucination

arXiv:2407.04831v1 Announce Type: new
Abstract: Generative models such as large language models (LLMs) are widely used as code copilots and for whole-program generation. However, the programs they generate often have questionable correctness, authenticity, and reliability: they may fail to follow user requirements, produce incorrect or nonsensical output, or contain semantic or syntactic errors. These failures are collectively known as LLM hallucination. In this work, we present several types of code hallucination. We first generated examples of such hallucinated code manually using large language models. We also present HallTrigger, a technique that demonstrates efficient ways of generating arbitrary code hallucinations. Our method leverages three dynamic attributes of LLMs to craft prompts that successfully trigger hallucinations without requiring access to model architecture or parameters. Results on popular black-box models suggest that HallTrigger is indeed effective and that pervasive LLM hallucination has a significant impact on software development.
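The abstract does not spell out HallTrigger's prompt-construction details, but the black-box workflow it describes can be sketched as follows. The snippet below is a minimal illustration, not the authors' implementation: `query_model` is a hypothetical stand-in for any chat-completion API, the three entries in `dynamic_attributes` are placeholder examples (the abstract does not name the attributes HallTrigger actually uses), and syntactic validity via `ast.parse` is just one coarse hallucination signal.

```python
import ast

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a black-box LLM API call
    (e.g., a chat-completion endpoint); returns generated code."""
    raise NotImplementedError("wire up your model API here")

# Placeholder "dynamic attributes" folded into the prompt; the
# abstract does not name the three attributes HallTrigger uses.
dynamic_attributes = {
    "confidence_pressure": "answer quickly and confidently",
    "context_shift": "assume a niche library with sparse documentation",
    "constraint": "the solution must fit in one function",
}

def craft_prompt(task: str, attrs: dict) -> str:
    """Combine a coding task with attribute-derived instructions."""
    hints = " ".join(attrs.values())
    return f"{task}\nConstraints: {hints}\nReturn only Python code."

def is_syntactically_valid(code: str) -> bool:
    """One coarse check: syntactic hallucinations fail to parse."""
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

def probe(task: str) -> None:
    prompt = craft_prompt(task, dynamic_attributes)
    code = query_model(prompt)  # black-box: no weights or params needed
    if not is_syntactically_valid(code):
        print("syntactic hallucination detected")
    # Semantic hallucinations (wrong behavior, invented APIs) need
    # further checks, e.g., running the code against test cases.
```

Note that this checks only the cheapest failure mode; detecting the semantic hallucinations the abstract mentions (code that parses but does not meet the user's requirements) would require executing the output against a specification.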
