What follows is an abstract for a paper that I do not have time to write, but which I think would be a useful contribution to the literature, assuming what follows is actually true. Please feel free to do the actual research to find out. Let me know!
Conjoint experiments have experienced explosive growth in political science. The standard methodology for analysing them uses linear regression to estimate Average Marginal Component Effects (AMCEs; Hainmueller, Hopkins & Yamamoto 2014).
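To make the standard approach concrete, here is a minimal sketch of AMCE estimation on simulated conjoint data. Everything in it (the attribute names, the sample size, the "true" effect sizes) is made up for illustration; the only point is that, with independently randomised attributes, a linear regression of the binary choice on dummy-coded attribute levels recovers the AMCEs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # number of rated profiles (hypothetical)

# Two binary attributes, independently randomised as in a conjoint design.
job = rng.integers(0, 2, n)      # 0 = baseline level, 1 = alternative level
country = rng.integers(0, 2, n)  # 0 = baseline level, 1 = alternative level

# Assumed "true" AMCEs for the simulation: +0.15 and +0.05 on choice probability.
p = 0.4 + 0.15 * job + 0.05 * country
y = rng.binomial(1, p)

# AMCE estimation: OLS of the binary outcome on the dummy-coded levels.
X = np.column_stack([np.ones(n), job, country])
beta, *_ = np.linalg.lstsq(X, y.astype(float), rcond=None)
# beta[1] and beta[2] are the estimated AMCEs for the two attributes.
print(beta[1], beta[2])
```

In practice one would also cluster standard errors by respondent, since each respondent typically rates several profiles; that step is omitted here.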
The summer is a great time for catching up on research, or for frittering away time on inessential infrastructure projects. After updating my RMarkdown templates for presentations and papers, I have just spent a few days migrating my website off WordPress onto Hugo. Hugo is a static site generator, which means that all the content management happens on my computer, and all that gets uploaded to my site is a collection of static pages.
I am posting the slides and audio here from a talk I gave at the LSE about the election, one day before it happened. Some things in here are right, some things in here are wrong. The polling estimates I was talking about were the penultimate estimates posted at https://today.yougov.com/us-election/ so the numbers there now are a bit different. Most of the post-election value here lies in the estimates of which kinds of voters switched from R to D and from D to R relative to 2012.
In the lead-up to the UK referendum on EU membership, Doug Rivers and I posted an analysis of several weeks of YouGov polling data, using a methodology called multilevel regression and post-stratification (MRP). This is a different approach to analysing polling responses than the approach YouGov uses for most of its UK polls, including those released immediately before and after the referendum on 23 June. The MRP approach aims to better correct for demographic imbalances in raw polling samples; in that post on YouGov's site, it also yielded several interesting findings regarding the interactions of age, educational qualifications, party and referendum vote.
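To illustrate the post-stratification idea in the simplest possible terms, here is a toy sketch. The cells, shares and support rates are invented, and cell means stand in for the multilevel regression step of a real MRP analysis; the point is only that weighting cell-level estimates by known population shares corrects a raw sample that over-represents one demographic group.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: two demographic cells (say graduates / non-graduates)
# with known population shares from census-style data.
pop_share = np.array([0.3, 0.7])

# Assumed "true" support in each cell, for the simulation only.
true_p = np.array([0.7, 0.4])

# The raw poll over-samples cell 0, so its unweighted mean is biased.
sample_share = np.array([0.6, 0.4])
n = 2000
cell = rng.choice(2, size=n, p=sample_share)
y = rng.binomial(1, true_p[cell])

raw_estimate = y.mean()

# Modelling step (cell means here, in place of a multilevel regression),
# then post-stratification: weight the cell estimates by population shares.
cell_means = np.array([y[cell == c].mean() for c in range(2)])
mrp_estimate = np.sum(pop_share * cell_means)

print(raw_estimate, mrp_estimate)
```

In a real MRP analysis the first step is a multilevel model fit over many sparsely populated demographic-by-geography cells, with partial pooling stabilising the cell estimates; the post-stratification step is exactly the weighted sum above.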
A couple of years ago, I wrote a post on my journal review debt, which I defined as the difference between the number of peer reviews I had completed and the number I had caused other political scientists to write. I am going to be an Associate Editor for the American Political Science Review starting September 1, which is going to mess up this calculation of journal review debt because it does not take into account editorial work (an omission for which I will shortly be receiving my comeuppance).
The current controversy about a large-scale experiment conducted in Montana by Stanford and Dartmouth political scientists raises several issues about research ethics in political science. To see a scan of the mailer that was sent to a random subset of Montana registered voters, follow this link.
Many of the objections I have read specifically refer to the form of the mailer. I am not going to engage with those objections here.
For a while, I have wondered just how many more peer reviews I have caused to be written than I have written myself. I suspect that this kind of journal review debt is more or less inevitable as an early-career scholar. So rather than write a review that is due today, I decided to go back through my records to figure this out before it became too overwhelming to do so.
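The calculation itself is simple arithmetic, sketched below with entirely made-up numbers (the real counts are what the trawl through my records would produce). The assumption that each submission generates about three reviews is also mine, for illustration.

```python
# A minimal sketch of the "journal review debt" calculation, with made-up numbers.
reviews_written = 12          # hypothetical: reviews I have completed
submissions = 8               # hypothetical: manuscripts I have submitted
reviewers_per_submission = 3  # assumed average number of reviews per submission

reviews_caused = submissions * reviewers_per_submission
review_debt = reviews_caused - reviews_written
print(review_debt)  # positive means I have caused more reviews than I have written
```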