I have created this infographic to summarise my experiences with automated sentiment analysis programs. I have used a few over the years, and while they are getting better, they are far from perfect; indeed, they are sometimes described as being no better than tossing a coin!
One peculiarity that I have noticed is that the sentiment engines which have undergone some form of sector-specific ‘training’ often seem to be more accurate.
For example, I was checking some social media coverage on cigarettes and it was notable how poor the sentiment scoring was. Often the same tweet, when re-tweeted, was given a different sentiment score. Then a week or so later I was looking at coverage on mobile phones and the scoring was markedly more accurate. Whereas in the cigarette example I was correcting possibly every other item, with the mobile phone sector it was more like one in every four items.
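My guess as to why sector-specific ‘training’ helps is that the words carrying sentiment differ by sector (in tobacco coverage, terms like “quit” or “smoke-free” can flip polarity relative to general usage). As a minimal sketch of the idea, and not a description of any of the tools I actually used, the snippet below trains the same simple classifier twice: once on broad, mixed-topic labelled tweets and once on sector-specific ones. The file names and column layout are hypothetical placeholders.

```python
# Minimal sketch (not the author's tools): why sector-specific "training" can help.
# Assumes small CSV files of labelled tweets exist; names and columns are hypothetical.
import csv
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def load_labelled_tweets(path):
    """Read (text, label) pairs from a CSV with 'text' and 'sentiment' columns (hypothetical layout)."""
    texts, labels = [], []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            texts.append(row["text"])
            labels.append(row["sentiment"])  # e.g. "positive" / "negative" / "neutral"
    return texts, labels

# A generic model trained on broad, mixed-topic data...
generic_texts, generic_labels = load_labelled_tweets("generic_tweets.csv")
generic_model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                              LogisticRegression(max_iter=1000))
generic_model.fit(generic_texts, generic_labels)

# ...versus the same architecture trained only on sector-specific data, which lets it
# learn the sentiment that sector-specific vocabulary actually carries in context.
sector_texts, sector_labels = load_labelled_tweets("tobacco_tweets.csv")
sector_model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                             LogisticRegression(max_iter=1000))
sector_model.fit(sector_texts, sector_labels)

print(sector_model.predict(["Great news that the smoking ban is being extended"]))
```

In my experience the commercial tools that appear to do something along these lines (however they implement it internally) are the ones that needed far fewer manual corrections.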