While Automatic Speech Recognition (ASR) systems are widely used in many real-world applications, they often do not generalize well to new domains and need to be fine-tuned on data from those domains. However, target-domain data is usually not readily available in many scenarios. In this paper, we propose a new method for adapting ASR models to new target domains without any text or speech from those domains. To accomplish this, we propose a novel data synthesis pipeline that uses a Large Language Model (LLM) to generate a target-domain text corpus, and a state-of-the-art controllable speech synthesis model to generate the corresponding speech. We propose a simple yet effective in-context instruction fine-tuning strategy to increase the effectiveness of the LLM in generating text corpora for new domains. Experiments on the SLURP dataset show that the proposed method achieves an average relative word error rate improvement of 28% on unseen target domains without any performance drop in source domains.
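
The following is a minimal sketch of the kind of data synthesis pipeline the abstract describes: an LLM prompted with a few in-context examples produces a target-domain text corpus, a controllable TTS model converts it to speech, and the resulting pairs are used to fine-tune the ASR model. All function names, the prompt format, and the interfaces (`llm`, `tts`, `asr_model`) are illustrative assumptions, not the paper's actual implementation.

```python
from typing import Callable, List, Sequence, Tuple


def generate_target_domain_text(
    llm: Callable[[str], List[str]],
    domain: str,
    in_context_examples: Sequence[str],
    num_sentences: int,
) -> List[str]:
    """Prompt the (in-context instruction fine-tuned) LLM with a few example
    utterances from the target domain and ask it for new, similar sentences."""
    prompt = (
        f"Generate {num_sentences} short user utterances for the '{domain}' domain.\n"
        "Examples:\n"
        + "\n".join(f"- {ex}" for ex in in_context_examples)
        + "\nNew utterances:\n"
    )
    # Placeholder: any text-generation backend that returns a list of sentences.
    return llm(prompt)


def synthesize_speech(
    tts: Callable[[str], bytes],
    sentences: Sequence[str],
) -> List[Tuple[str, bytes]]:
    """Pair each generated sentence with audio from a controllable TTS model,
    e.g. varying speaker identity or prosody for acoustic diversity."""
    return [(sentence, tts(sentence)) for sentence in sentences]


def adapt_asr(asr_model, synthetic_pairs, source_domain_data):
    """Fine-tune the ASR model on the synthetic target-domain (text, audio)
    pairs, typically mixed with source-domain data so that source-domain
    accuracy is not degraded."""
    raise NotImplementedError("training loop depends on the chosen ASR toolkit")
```

Mixing the synthetic target-domain pairs with existing source-domain data during fine-tuning is one plausible way to achieve the reported property of improving on unseen target domains without degrading source-domain performance; the paper's exact training recipe is not specified here.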