Hand gesture recognition is becoming a more prevalent mode of human-computer interaction, especially as cameras proliferate across everyday devices. Despite continued progress in this field, gesture customization is often underexplored. Customization is crucial because it enables users to define and demonstrate gestures that are more natural, memorable, and accessible. However, customization requires efficient use of user-provided data. We introduce a method that enables users to easily design bespoke gestures with a monocular camera from one demonstration. We employ transformers and meta-learning techniques to address few-shot learning challenges. Unlike prior work, our method supports any combination of one-handed, two-handed, static, and dynamic gestures, including different viewpoints. We evaluated our customization method through a user study with 20 gestures collected from 21 participants, achieving up to 97% average recognition accuracy from one demonstration. Our work provides a viable path for vision-based gesture customization, laying the foundation for future advancements in this space.