This is not a very pleasant expression and I am only using it to illustrate what I see as a pressing threat to communications research. Google Sheets has had a ChatGPT plug-in since around the start of the year. As is often said, ChatGPT is not a search engine. However, the Sheets extension can be instructed to go to a URL and execute a ChatGPT prompt. Since May of this year, I have used it to supplement my human sentiment tracking of references to a client in UK-based online news outlets. The prompt I gave it was:
In no more than 2 words analyse the sentiment towards Client A using the following categories: mainly positive | slightly positive | neutral | slightly negative | mainly negative | for the content found at…URL
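For anyone wanting to reproduce this outside Google Sheets, below is a minimal sketch of the same workflow against the OpenAI API. It is not the Sheets plug-in itself: the plug-in's internals (how it retrieves the page, which model it calls) are not documented here, so the fetch step, the model name and the "Client A" placeholder are all assumptions for illustration.

```python
# Sketch only: fetch an article and ask the model for a two-word sentiment label.
# Assumes the `requests` and `openai` packages and an OPENAI_API_KEY environment variable.
import requests
from openai import OpenAI

CATEGORIES = ("mainly positive | slightly positive | neutral | "
              "slightly negative | mainly negative")

def score_sentiment(url: str, client_name: str = "Client A") -> str:
    # Fetch the page; a real pipeline would strip boilerplate HTML first.
    article = requests.get(url, timeout=30).text

    prompt = (
        f"In no more than 2 words analyse the sentiment towards {client_name} "
        f"using the following categories: {CATEGORIES} | "
        f"for the following content:\n\n{article[:8000]}"  # crude truncation to fit the context window
    )

    response = OpenAI().chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the Sheets plug-in does not expose this
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the labels as repeatable as possible for tracking over time
    )
    return response.choices[0].message.content.strip()

# Example usage:
# print(score_sentiment("https://example.com/news-article"))
```

One design note: the sketch passes the fetched page text to the model rather than the bare URL, since the model cannot browse on its own; whether the Sheets extension does the same is an assumption.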
This is the rub… most of the time we did not agree. I have personally been tracking this client for eight years and consider myself proficient at scoring sentiment. Before that, I had been doing similar work for other clients since the mid-nineties.
I tried modifying the prompt along the way but always found the one above to be the most accurate. How accurate was it? Of the sample of 2,022 clips, we concurred on 807 instances. That works out at 39.9% of the time. I am sure this will get better in time and I don’t mean to make this an overly cautionary tale. My hope is that it might make people realise that the shiny boxes and promises of the automated/rapid media evaluation vendors should be approached with due caution. So, feel free to dispose of your human analysis, but do so while being aware of the limitations. I would welcome your thoughts…