QAI Speech to Image Capture

Research in 2016 demonstrated for the first time that a person's voice carries information predictive of their facial appearance, and vice versa.4 This finding provides the basis for a novel QAI Digital Physics machine learning application in which QAI samples the voice signature of a human subject and then draws that person's composite face.

When QAI is exposed to edge-detected face encodings and the encoded voices that correspond to them, the result is a set of iMolecules, which form the basis for QAI drawing a composite of the person's face from a QAI-analyzed voice sample.
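The pairing step described above can be sketched in code. This is a minimal illustration only: the source does not specify the iMolecule data structure, so here an iMolecule is modeled as a simple stored pair of a normalized voice encoding and its corresponding face edge encoding. All names (`IMoleculeStore`, `bind`) are hypothetical.

```python
import numpy as np

class IMoleculeStore:
    """Hypothetical sketch of iMolecule formation: each iMolecule is
    modeled here as a bound pair (voice encoding, face edge encoding).
    The actual QAI mechanism is not specified in the source."""

    def __init__(self):
        self.molecules = []  # list of (voice_vector, face_edge_encoding)

    def bind(self, voice_vec, face_edges):
        # Normalize the voice signature so later similarity comparisons
        # are scale-invariant (cosine similarity).
        v = np.asarray(voice_vec, dtype=float)
        v = v / np.linalg.norm(v)
        self.molecules.append((v, np.asarray(face_edges)))

# Example: bind one voice signature to its edge-detected face encoding.
store = IMoleculeStore()
store.bind([0.2, 0.9, 0.1], [[0, 1], [1, 0]])
print(len(store.molecules))  # → 1
```

The normalization choice is an assumption made so that a later matching stage can compare voice signatures independently of recording loudness.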

When a new voice is presented to the QAI iMolecular structures, it appears to the QAI system as an iMolecule structural analog.5 Depending on how similar the structural analog is to the original iMolecular structure, QAI produces a line drawing of the person's face from the captured voice signal. QAI learning and recognition is unbiased, requires no training, and functions in real time. In contrast to current supervised, unsupervised, and reinforcement learning algorithms, which have not fundamentally changed in over 30 years, QAI represents the first novel machine learning system of the 21st century.
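The analog-matching step can be sketched as a retrieval problem. This is a hedged illustration, not the QAI algorithm itself: the source does not define how structural-analog similarity is measured, so cosine similarity between voice encodings is assumed here, with a hypothetical threshold deciding whether the analog is close enough to drive a face drawing. The stored pairs and all names (`match_analog`, `threshold`) are illustrative.

```python
import numpy as np

def _unit(v):
    # Normalize a voice encoding for cosine-similarity comparison.
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# Hypothetical stored iMolecules: (voice encoding, face edge encoding) pairs.
store = [
    (_unit([0.2, 0.9, 0.1]), "face_A_edges"),
    (_unit([0.8, 0.1, 0.6]), "face_B_edges"),
]

def match_analog(new_voice, threshold=0.8):
    # Treat structural-analog closeness as cosine similarity between
    # unit voice vectors (an assumption; the source leaves the metric
    # unspecified). Return the face edge encoding of the best match
    # above the threshold, else None (no drawing produced).
    v = _unit(new_voice)
    best_score, best_edges = max(
        (float(np.dot(v, vec)), edges) for vec, edges in store
    )
    return best_edges if best_score >= threshold else None

print(match_analog([0.25, 0.85, 0.15]))  # → face_A_edges (close analog)
print(match_analog([0.0, 0.0, 1.0]))     # → None (no sufficiently close analog)
```

In this sketch the retrieved edge encoding stands in for the line drawing; a real pipeline would render those edges rather than return a label.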