Neural knowledge-to-text generation models often struggle to faithfully generate descriptions for the input facts: they may produce hallucinations that contradict the given facts, or describe facts not present in the input. To reduce hallucinations, we propose a novel decoding method, TWEAK (Think While Effectively Articulating Knowledge). TWEAK treats the generated sequences at each decoding step and their future sequences as hypotheses, and ranks each generation candidate based on how well its corresponding hypotheses support the input facts, using a Hypothesis Verification Model (HVM). We first demonstrate the effectiveness of TWEAK by using a Natural Language Inference (NLI) model as the HVM, and report improved faithfulness with minimal impact on quality. We then replace the NLI model with a task-specific HVM trained on a first-of-its-kind dataset, FATE (Fact-Aligned Textual Entailment), which pairs input facts with their faithful and hallucinated descriptions, with the hallucinated spans marked. The new HVM further improves both faithfulness and quality, and runs faster. Overall, the best TWEAK variants improve faithfulness, measured by FactKB, by 2.22/7.17 points on average over WebNLG and TekGen/GenWiki, respectively, with only 0.14/0.32 points of degradation in quality, measured by BERTScore, on the same datasets. Since TWEAK is a decoding-only approach, it can be integrated with any neural generative model without retraining.
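The candidate-reranking step can be illustrated with a minimal sketch. The HVM below is a hypothetical stand-in (a keyword-overlap scorer), not the trained NLI or FATE model, and the sketch scores only the prefix plus each candidate, omitting the future-sequence hypotheses used in the full method; `alpha` is an assumed interpolation weight between the LM score and the HVM score.

```python
def hvm_score(facts, hypothesis):
    """Toy faithfulness score: fraction of fact tokens present in the hypothesis.

    A real HVM (e.g. an NLI model) would instead output an entailment
    probability of the facts given the hypothesis.
    """
    fact_tokens = {tok for fact in facts for tok in fact.lower().split()}
    hyp_tokens = set(hypothesis.lower().split())
    return len(fact_tokens & hyp_tokens) / len(fact_tokens)


def tweak_rerank(facts, prefix, candidates, lm_scores, alpha=0.5):
    """Rank generation candidates at one decoding step.

    Each candidate continuation is appended to the generated prefix to form
    a hypothesis; the final score interpolates the LM score with how well
    the HVM judges that hypothesis to support the input facts.
    """
    ranked = []
    for cand, lm in zip(candidates, lm_scores):
        hypothesis = f"{prefix} {cand}".strip()
        score = alpha * lm + (1 - alpha) * hvm_score(facts, hypothesis)
        ranked.append((score, cand))
    ranked.sort(reverse=True)
    return [cand for _, cand in ranked]


# Illustrative WebNLG-style triple: the faithful continuation outranks
# the one the LM alone prefers.
facts = ["Alan_Turing birthPlace London"]
best = tweak_rerank(facts, "Alan Turing was born in",
                    ["London .", "Paris ."], lm_scores=[0.6, 0.7])[0]
print(best)  # → London .
```

Because reranking touches only the candidate scores at decoding time, the underlying generator's parameters are untouched, which is what makes the approach retraining-free.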