What we have done
We have reflected on the feedback from the mid-crit.
We have contacted Joel Brynielsson and received a long reply.
We have contacted a journalist who agreed to participate at a later stage of this process and give his impressions of our system.
We were also contacted by the Citizen Journalism group about a possible collaboration (as was apparently suggested at the mid-crit). We had a meeting and described our projects in greater detail. While we didn’t find very strong links between our projects, we will keep in touch and try to help each other out if needed.
Finally, we have continued with our research, reading many papers in different areas.
What we will do
There is still some research to be done, and then we need to look through our sources and start writing the report. We will try to get an interview with Joel Brynielsson if he is available; otherwise, we will email him some more questions. We still need to find good example articles that we can use for the report and the presentation.
We still need to figure out what can be done, and which of the biases, if any, we should focus on.
There are still many challenges ahead that we need to deal with. Can our system be anything more than fact checking?
We have to find a balance between prioritizing “well recognized” articles and breaking the filter bubble for our users.
Changes in the project
So far so good...
We found some existing systems that deal with natural language processing and sentiment analysis, such as:
WolframAlpha (http://www.wolframalpha.com): Answers natural-language questions such as “Who is the prime minister of Sweden?” and “What is the highest mountain in France?”
AlchemyAPI (http://www.alchemyapi.com): NLP and sentiment analysis. Very cool!
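To give a feel for the simplest form of sentiment scoring that tools like these build on, here is a toy lexicon-based sketch. The word list and scores are entirely our own invention for illustration, not anything from AlchemyAPI; real systems use far larger lexicons and statistical models.

```python
# Toy lexicon-based sentiment scorer. The word scores below are invented
# for illustration only.
LEXICON = {
    "good": 1.0, "great": 2.0, "excellent": 2.0,
    "bad": -1.0, "terrible": -2.0, "awful": -2.0,
}

def sentiment(text):
    """Average the lexicon scores of the known words in the text.

    Returns a positive number for positive text, negative for negative,
    and 0.0 when no lexicon words appear.
    """
    words = text.lower().split()
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

print(sentiment("a great and excellent result"))  # positive score
print(sentiment("a terrible decision"))           # negative score
```

Even this crude averaging shows why sentiment analysis alone cannot detect bias: it measures tone, not factual accuracy or slant.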
We have read books and articles to get a grip on current research in detecting and quantifying bias, as well as to see how bias is perceived. Some of them are:
Lies: And the Lying Liars Who Tell Them - Al Franken
The book gives a humorous and personal approach to bias and media manipulation in US news. It is mostly focused on pointing out occasions where people are knowingly lying and misinforming through the media. We will be able to pull example articles out of this book in order to show how our system will identify factually false statements in news.
More Voices Than Ever? - Yu-Ru Lin et al.
The article attempts to quantify political bias by examining which news channels reference which think tanks, in comparison to which legislators do. The think tanks are politically categorised according to how many references they get from legislators, and the news channels are then politically labeled based on which think tanks they cite. It is unclear how this shows bias rather than the political direction of the papers, which in our view is not entirely separable right now.
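A rough sketch of how such a citation-based slant score could be computed. This is our own simplification with invented think-tank names and citation counts, not the authors' actual method or data: think tanks get a slant from the party balance of the legislators citing them, and an outlet's slant is the citation-weighted average over the think tanks it cites.

```python
# Sketch of a citation-based slant score (invented data throughout).
# Parties are encoded as -1 (left-leaning) and +1 (right-leaning).

# Citations of each (hypothetical) think tank by legislators, per party.
legislator_cites = {
    "ThinkTankA": {-1: 80, +1: 20},
    "ThinkTankB": {-1: 10, +1: 90},
}

# Think-tank slant: citation-weighted average of the citing parties,
# in [-1, +1].
tank_slant = {
    tank: sum(party * n for party, n in cites.items()) / sum(cites.values())
    for tank, cites in legislator_cites.items()
}

def outlet_slant(citations):
    """Slant of a news outlet: average slant of the think tanks it cites,
    weighted by how often it cites each one."""
    total = sum(citations.values())
    return sum(tank_slant[t] * n for t, n in citations.items()) / total

# An outlet citing ThinkTankA three times as often as ThinkTankB:
print(outlet_slant({"ThinkTankA": 30, "ThinkTankB": 10}))  # negative (leans left)
```

The sketch also makes our objection concrete: the score only measures where an outlet sits on the left-right citation axis, not whether its reporting is biased or inaccurate.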
Visualizing Media Bias through Twitter - Jisun An et al.
The article looks at how Twitter users are in different media filter bubbles, thus arguing that there is a bias due to polarization in what news you read.
Sentiment Analysis and Opinion Mining - Bing Liu
An overview of research done in sentiment analysis and opinion mining, written in 2012 and aimed at students of the field. It is very comprehensive and we have yet to finish it, but what we have read so far has been promising.
Have a nice day!