mysimpleshow – empowering explanation

How mysimpleshow’s Explainer Engine Works

3 Oct 2018


Balance is key! You need balance to keep your body upright, you eat a balanced diet to stay healthy, and you aim for a work-life balance to be happy. The same idea applies to creating explainer videos. Like theory and practice, the mysimpleshow Explainer Engine has two parts: the visible part, meaning the steps you take to create the video, and the Artificial Intelligence (AI) that does the work under the hood. Let’s take a look at both sides of mysimpleshow and build a balanced understanding of the tool.

Practice – Creating Videos

Script from Scratch or PowerPoint

First, you select a template, and then it’s time to write your script. You can write it from scratch or import a PowerPoint presentation with your prepared text.

Choosing the right images

Next, you can adapt the pre-selected keywords and images and rearrange them on the screen.

Publishing

Finally, you choose a voice and finalize the video. You can now share and download it. A guide with more details on each step is available from mysimpleshow.

Theory – Understanding How the Explainer Engine Works

Semantic Recognition

mysimpleshow’s Explainer Engine recognizes the relevant keywords in your script: it is programmed to identify words and their relation to the surrounding context, and, because only a limited number of keywords can appear in a scene, it decides which of them are most relevant. In the matchmaking process it then links each selected keyword with a visual representation. Again, an algorithm chooses the most relevant image, but it also offers a selection of similar images to choose from.
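To make the idea concrete, here is a deliberately simplified sketch of keyword selection: score words by frequency, drop common filler words, and keep only a limited number. The real engine uses far richer semantic analysis; all names and the stopword list here are illustrative.

```python
# Toy keyword selection: frequency scoring with a stopword filter,
# capped at a fixed number of keywords (illustrative only).
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "it"}

def select_keywords(script, limit=7):
    words = re.findall(r"[a-z']+", script.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    # Keep only the `limit` highest-scoring words.
    return [word for word, _ in counts.most_common(limit)]

script = "The chef cooks pasta. The chef serves pasta and salad to guests."
print(select_keywords(script, limit=3))
```

A production system would weigh context and meaning rather than raw counts, but the cap on the number of keywords works the same way.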

Global Knowledge Integration

The matching process integrates existing information and knowledge that is activated during semantic recognition. The software is programmed to make connections between words: it recognizes names, for example, and may suggest an image of a woman when “Janet” is chosen as a keyword.
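The matchmaking step can be pictured as a catalog lookup with a knowledge-based fallback. This is only a sketch: the catalog, the name list, and all file names are invented, and the real engine draws on a much larger knowledge base.

```python
# Illustrative matchmaking: look up a keyword in a small image catalog;
# if it looks like a personal name, fall back to a generic person image.
# All data here is a hypothetical stand-in for the engine's knowledge base.
KNOWN_NAMES = {"janet", "john", "maria"}            # stand-in name gazetteer
IMAGE_CATALOG = {"dog": "dog.svg", "house": "house.svg"}

def match_image(keyword):
    key = keyword.lower()
    if key in IMAGE_CATALOG:
        return IMAGE_CATALOG[key]
    if key in KNOWN_NAMES:
        return "person_generic.svg"                 # e.g. a woman for "Janet"
    return "placeholder.svg"                        # nothing recognized

print(match_image("Janet"))   # person_generic.svg
```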

Layout Automation

The Explainer Engine not only draws conclusions and makes connections between words and meanings, it also applies layout logic. In practice you can see this in the spacing: gaps are left between keywords and their chosen images. Although a scene is limited to 7 keywords, the engine also keeps second choices ready in case you delete certain words or want to pick a different keyword. It also recognizes listings in the text and can display them visually as a group. In short, the engine recognizes relations between words and implements them visually.
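The spacing idea can be sketched as a simple slot layout: up to seven keyword/image slots are spread across a canvas with a fixed gap between neighbors. The canvas width, gap size, and the even spacing rule are all assumptions for illustration, not the engine’s actual layout algorithm.

```python
# Sketch of automatic spacing: place up to seven slots evenly across
# a canvas, with a fixed gap between neighbors. Numbers are invented.
def layout_slots(keywords, canvas_width=1200, gap=40):
    n = min(len(keywords), 7)          # scenes are capped at 7 keywords
    slot_width = (canvas_width - gap * (n - 1)) // n
    # Map each keyword to the x-position of its slot.
    return {kw: i * (slot_width + gap) for i, kw in enumerate(keywords[:n])}

positions = layout_slots(["chef", "pasta", "guests"])
print(positions)
```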

Text-to-Speech

mysimpleshow uses text-to-speech (TTS), speech synthesis that converts text into spoken output. The software follows a six-step model: a voice actor records large amounts of speech, which is then segmented into linguistic units such as phrases, morphemes, and phones. Similar to the engine’s visual image database, this creates a speech database. The written text is analyzed and matched against the existing speech units, which are then aligned and concatenated to produce the voice output.
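At its core, this is unit selection: match pieces of the input text against a database of recorded units and concatenate them. The toy version below matches whole words against made-up audio files; real systems work with phones and diphones and do signal processing to smooth the joins.

```python
# Toy unit-selection TTS: match each word against a tiny database of
# pre-recorded units and concatenate them. The database is invented.
SPEECH_UNITS = {"hello": "hello.wav", "world": "world.wav"}

def synthesize(text):
    units = []
    for word in text.lower().split():
        # Fall back to a marker when no recorded unit exists.
        units.append(SPEECH_UNITS.get(word, f"<missing:{word}>"))
    return units

print(synthesize("Hello world"))   # ['hello.wav', 'world.wav']
```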

In summary, it’s all about balance. The engine is software that runs a number of processes to identify keywords and match them with visual support, while a TTS system takes care of speech production and creates the voice for your video.