With the assistance of creative prompt engineering and in-context learning, large language models (LLMs) are known to generalize well on a variety of text-based natural language processing (NLP) tasks. However, to perform well on spoken language understanding (SLU) tasks, LLMs either need to be equipped with a built-in speech modality or they need to rely on speech-to-text conversion from an off-the-shelf automatic speech recognition (ASR) system. In this work, we focus on the latter setup, where the accuracy of the LLM on SLU tasks is constrained by the accuracy of a frozen ASR system on the given speech input. Specifically, we tackle the task of speech intent classification, where a high word error rate (WER) implies that the LLM may not have the correct textual information to understand the spoken intent. To alleviate this problem, we propose to prompt the LLM with an n-best list of ASR hypotheses instead of only the error-prone 1-best hypothesis. We first explore prompting the LLM with descriptive prompts that explain the concept of n-best lists to invoke the LLM's emergent abilities to understand the task, followed by finetuning of LoRA adapters on the intent classification task. We demonstrate the efficacy of our approach on a binary device-directed speech detection task as well as on a keyword spotting task on the Google Speech Commands dataset, where systems using n-best list prompts outperform those using 1-best ASR outputs, thus paving the way for an efficient method to exploit ASR uncertainty via LLMs for speech-based applications.
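To make the proposed setup concrete, the following is a minimal sketch of how an ASR n-best list might be packed into a descriptive prompt for intent classification. The `format_nbest_prompt` helper, the prompt wording, and the example hypotheses are illustrative assumptions, not the paper's exact prompt template.

```python
# Hypothetical sketch: turning an ASR n-best list into a descriptive
# intent-classification prompt for an LLM. The exact wording used in the
# paper is not reproduced here; this only illustrates the idea of exposing
# multiple error-prone hypotheses instead of a single 1-best transcript.

def format_nbest_prompt(nbest, intents):
    """Build a prompt that explains the n-best list to the LLM."""
    hypotheses = "\n".join(f"{i + 1}. {hyp}" for i, hyp in enumerate(nbest))
    return (
        "Below are the n-best transcription hypotheses from a speech "
        "recognizer for one utterance, ordered from most to least likely. "
        "They may contain recognition errors.\n"
        f"{hypotheses}\n"
        f"Classify the speaker's intent as one of: {', '.join(intents)}.\n"
        "Intent:"
    )

prompt = format_nbest_prompt(
    nbest=[
        "turn on the living room lights",
        "turn on the living room light",
        "turning on the living room lights",
    ],
    intents=["device-directed", "not device-directed"],
)
print(prompt)
```

In the descriptive-prompt setting this string would be sent to a frozen LLM directly; in the finetuning setting, prompts of this form would serve as training inputs for LoRA adapters on the intent labels.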