In-context learning with Large Language Models (LLMs) has emerged as a promising avenue of research in Dialogue State Tracking (DST). However, the best-performing in-context learning methods involve retrieving and adding relevant examples to the prompt, requiring access to labeled training data. Procuring such training data for a wide range of domains and applications is time-consuming, expensive, and, at times, infeasible. While zero-shot learning requires no training data, it significantly lags behind the few-shot setup. Thus, 'Can we efficiently generate synthetic data for any dialogue schema to enable few-shot prompting?' Addressing this question, we propose SynthDST, a data generation framework tailored for DST, utilizing LLMs. Our approach only requires the dialogue schema and a few hand-crafted dialogue templates to synthesize natural, coherent, and free-flowing dialogues with DST annotations. Few-shot learning using data from SynthDST results in a 4-5% improvement in Joint Goal Accuracy over the zero-shot baseline on MultiWOZ 2.1 and 2.4. Remarkably, our few-shot learning approach recovers nearly 98% of the performance compared to the few-shot setup using human-annotated training data.