Everybody has a voice, and it can be heard – anywhere, everywhere, all at once, and in written form. What people say reveals how they feel, and there is an increasingly important need to find ways to interpret those feelings.
The point of content is to generate an emotional reaction. No TV network, show, or branded integration has ever succeeded without emotionally connecting with its audience. There is a deep and rapidly growing need to measure and interpret how people react to what they watch, because these Emotional Reactions show what is resonating with viewers in a way no other analysis can. Yet to date there has been a deeply inaccurate assumption that those reactions are impossible to quantify. Canvs, with its patent-pending technology developed by co-founder Dr. Sam Hui, chief scientist, is able to interpret text on social media with an accuracy unmatched in the industry.
At Canvs, we’ve built a sophisticated system that parses and categorizes vast amounts of data. But what makes our system powerful is that we marry that machine-learning system to a human touch, ensuring that it correctly interprets our endlessly malleable dataset, the language of digital conversations.
Our expertise lies in identifying, sifting through, and categorizing Emotional Reactions. An Emotional Reaction is our proprietary metric: any piece of social media content that contains an emotion.
However, anyone who has spent any time online knows that the way we communicate there, particularly on social platforms, can be complicated. People use nonstandard spelling, loose grammar, and complex, fast-changing terminology. A single tweet can express more than one emotion. Humans are complex, yet the vast majority of social media analytics companies offer simplistic measurement that ignores these complexities, bucketing emotions into generic positive, negative, and neutral categories.
Canvs stays a step ahead with its unique patent-pending approach: segmenting the Emotional Reactions expressed in social media content into particular core emotions. We do so by empowering Canvs (and our team) with a mixture of science and judgment, a unique approach to quantitative analysis developed by Dr. Sam Hui, our chief scientist and co-founder.
Through deep natural language processing and semantic analysis, we weed out the social media content that contains no Emotional Reaction and classify the content that does. Machine-learning-assisted technicians then review the results, working hard to keep Canvs as accurate as possible.
By using the right mix of science and human judgment, we can set sentiment aside for a more nuanced way of understanding how people feel. Humans don't feel in 'positive,' 'negative,' or 'neutral.' We know this. So we created a methodology that parses the many complex ways people express their feelings by laddering them up to specific core emotions. These core emotions give a holistic view of how people feel about a particular telecast, character, actor, moment, and more.
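The laddering idea can be sketched as a simple lexicon lookup. The phrases and core-emotion names below are illustrative assumptions, not Canvs's proprietary taxonomy or method:

```python
# Minimal sketch of laddering expressions up to core emotions.
# The phrase lists and emotion labels here are invented examples.
CORE_EMOTION_LEXICON = {
    "love": ["love", "adore", "obsessed with"],
    "funny": ["lol", "lmao", "hilarious"],
    "crazy": ["omg", "can't believe", "mind blown"],
    "dislike": ["hate", "can't stand", "the worst"],
}

def ladder_to_core_emotions(text):
    """Return every core emotion whose phrases appear in the text.

    A single post can express more than one emotion, so this
    returns a set rather than a single label. An empty set means
    the post carries no Emotional Reaction and is filtered out.
    """
    lowered = text.lower()
    return {
        emotion
        for emotion, phrases in CORE_EMOTION_LEXICON.items()
        if any(phrase in lowered for phrase in phrases)
    }

print(ladder_to_core_emotions("LMAO I can't believe that ending"))
```

Returning a set rather than one label reflects the point above: a single piece of content can express several emotions at once.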
In developing and refining our tool, we’re guided by a framework we call SIFT:
Semi-Supervised — Algorithms have never been an effective end-all, be-all for any platform. Over-reliance on any particular algorithm opens the door to flaws, errors in prioritization, and outright misinterpretation; algorithms are only as good as the assumptions upon which they're built. That's why Canvs builds in a human touch to help curate results at scale. That's vital for long-lasting quality control, and it lets us improve constantly in an iterative, agile fashion. For example, are people now using a phrase online that didn't exist a few months ago ('on fleek,' anyone?)? We flag those linguistic cultural markers and add them to our algorithm, so the next time our system observes them in social media content, Canvs can read them and attribute them appropriately.
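The human-in-the-loop cycle described here can be sketched as follows. The function names, queue, and labels are illustrative assumptions, not Canvs's actual pipeline:

```python
# Sketch of a semi-supervised update cycle: unknown phrases are
# queued for a human, whose label teaches the system for next time.
lexicon = {"on point": "love"}   # phrase -> core emotion (example data)
review_queue = []                # content awaiting a technician

def classify(text):
    """Label text with known phrases; queue anything unrecognized."""
    lowered = text.lower()
    for phrase, emotion in lexicon.items():
        if phrase in lowered:
            return emotion
    review_queue.append(text)    # a human will curate this
    return None

def technician_label(phrase, emotion):
    """A technician attributes a new phrase to a core emotion,
    so the system recognizes it automatically from then on."""
    lexicon[phrase.lower()] = emotion

assert classify("that look is on fleek") is None   # unknown slang, queued
technician_label("on fleek", "love")               # human curation
assert classify("that look is on fleek") == "love" # now learned
```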
Interpretable — Canvs is the industry standard in understanding how people Emotionally React to TV content, taking short-form text and aligning it to core emotions. It's that simple. By keeping our methodology a straightforward mix of science and judgment, we make sure the value we provide to the industry stays easy to explain.
Filtration — Canvs distills the data into useful, digestible breakdowns of emotions, filtering the results into ready-to-use products that clients can turn into smart, actionable decisions.
Translatable — Canvs takes the core emotions that we identify and outputs them in a meaningful, actionable, and verifiable manner. The core emotions are familiar, and our key metrics, Emotional Reactions and reaction rate, are transparent measures based on the data inputs.
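One plausible reading of a transparent reaction-rate metric (our assumption, not Canvs's published formula) is the share of analyzed posts that contain an Emotional Reaction:

```python
# Illustrative reaction-rate style metric. Canvs's exact formula is
# not public; this assumes reaction rate = Emotional Reactions as a
# share of all analyzed posts.
def reaction_rate(emotional_reactions, total_posts):
    """Fraction of posts that carried an Emotional Reaction."""
    if total_posts == 0:
        return 0.0
    return emotional_reactions / total_posts

print(reaction_rate(320, 1000))  # 0.32
```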
Canvs is a living, breathing system. We constantly evolve our database to account for the latest slang and terminology being used across platforms. We've refined this system to the point that emotions can be compared against any dataset with a level of standardization truly unique to the industry.