Grading (Digital) Campaign Week: Unprecedented Times, Unprecedented Measures


Let us convince you with Greek letters, lots of them.


A year ago, when we penned the inaugural installment of this report, it was still funny (and not wildly insensitive) to “toilet-paper” a house, many countries’ economies were sailing towards calmer waters, and you had to pay for oil (as opposed to now paying someone to get it off of your hands). It’s safe to say that much has changed since then. However, we’re still here, which means that Campaign Week, and the steadfast tradition that buttresses it, held strong against the winds of change and traipsed on, albeit in a slightly different manner, with the Quarterly team in tow, ticking boxes and collecting data in its gargantuan shadow. 

For those unfamiliar, this report will strive to quantify Campaign Week and everything that falls under its now-digital umbrella through the lens of list performance. This is to say that, at the end of this article, we will arrive at a grade for each list that represents a litany of indicators we held to be relevant and valuable in assessing the absolute and relative performance of a given list. Of course, if you’re here for just that and trust our methods, you can skip the fuzzy math and head straight for the last paragraph; otherwise, bear with us as we walk you through our updated algorithm and new indicators, churning out final grades together and giving you a better sense of the mechanisms that generate them.

In the build-up to this article, our team revisited the two cardinal indicators we had chosen as caketoppers, “reach” and “likeability”, and plugged data holes and insufficiencies where we found them: we now factor nationality into our “Diversity Indicator”, redefine “perfect equality”, which we will discuss at length further along, and give bonus points to lists that decided to maintain a website throughout the campaign, among other smaller but significant tweaks. To complement this, we also divorced voter engagement from likeability and included sundry data points that would not have been available to us had we not had a digital campaign week, adapting to the new online format just as the lists did. Our “Video Multiplier” remains largely unchanged, but is now accompanied by several other indicators which help form a more complete picture of voter engagement with each list. Our secret indicator, for those who know, stands firm, like a banyan tree. It’s not going anywhere anytime soon.

Our improved algorithm now runs as a function of eighteen smaller indicators, spilling into diversity, reach, voter engagement, and likeability tranches. In short, if you don’t hear from a list, or do not feel represented by it, you won’t attend its events and you won’t vote for it. Similarly, if you’re not the biggest fan of certain members of a given list, you won’t vote for them. We took these qualitative hypotheticals into account when informing our weighting system and the composition of the aforementioned indicators – at the end of the day, each list’s grade is a numerical representation of the nexus between that list and an average voter, taken speculatively and from the data voters themselves have provided, both willingly and passively. In the following paragraphs, we will cover each indicator thoroughly, making sure to keep explanations simple and the math simpler. First up: diversity.

Diversity

The diversity indicator comprises different ratios covering three areas: association participation, nationality, and program representation. The first enables us to gauge the size of a list’s peer network, the people they may either know casually or have grown close to by way of shared interest. The larger a list’s aggregate association participation, the more likely it is that people know of and are friendly with them, increasing the likelihood of votes in their favor. We also deemed nationalities to be an important indicator; given that the Reims campus is highly international, playing on very salient nationalist heartstrings between compatriots increases the potential for students to feel represented by a list. Lastly, program representation assesses the EURAM/EURAF split within each list with respect to the split in the student population, and reconciles the two.

Association Participation Ratio

The associations the members of a list belong to were compiled, and we allowed for association overlap (meaning that if two list members were part of the same association, we counted it twice, instead of once). Additionally, we included involvement with permanent bureaus in the total number of associations, in lieu of creating a separate “past bureau experience” indicator, because being part of a bureau grants a “baby” several advantages, chiefly visibility and experience, which compensate for, if not exceed, the loss in reach this may cause.

Using this association participation number, we established a ratio where the numerator α is a list’s aggregate association participation, and the denominator α_max is the highest association participation number among all lists. As such, the list with the highest association participation number gets a perfect score of 1 through this ratio, and the others are scored relative to it. As you will notice as we go along, we oscillate between attributing absolute and relative scores to certain datasets – there is always a reason for this, and we will do our best to explain it each time.

Association Participation Ratio = $\frac{\alpha}{\alpha_{\max}}$

Nationalities Ratio

We compiled the nationalities of the members of each list. Using the same formula as for the association participation ratio, we formed a ratio where the numerator ν is a list’s number of nationalities, and the denominator ν_max is the highest number of nationalities among all lists (the number of nationalities is based on information published in the Sundial Press interviews of each list).

Nationalities Ratio = $\frac{\nu}{\nu_{\max}}$

Program Representation

In our article on campaign week last year, the notion of deviance was used to assess the discrepancies between the number of EURAM and EURAF students within lists. A list consisting of 14 EURAMs, stated the example, will most likely not capture the EURAF vote. The idea was to refer to “perfect equality” as a goal for lists to feasibly achieve. Upon further reflection, we found that a list composed of 7 EURAMs and 7 EURAFs might not efficiently capture desired votes, as perfect equality does not match the campus population distribution. 

Exchange students are excluded from running as part of a list and, therefore, from these proportions; the EURAF program represents, roughly speaking, 1/3 of the student population, while the EURAM program represents the remaining 2/3. Consequently, our definition of “perfection” has been tweaked from a clean 50-50 split to a 1/3 EURAF and 2/3 EURAM representativity goal, the one most effective for lists looking to glean votes from the entire student population.

To obtain this multiplier, we first calculate the deviance of a list from program representativity. For instance, if a list has 12 members, with 10 EURAMs and 2 EURAFs, the deviance from 1/3 and 2/3 representativity is  2 (12/3-2=2). We use the following expression, where p is program representativity and β is deviance. 
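For readers who like their deviance spelled out, one shorthand reading of the example above (ours, not an official definition) is the absolute gap between a list’s EURAF headcount and one third of its size, with $n$ the size of the list and $n_{\text{EURAF}}$ its number of EURAF members:

$$\beta = \left|\frac{n}{3} - n_{\text{EURAF}}\right|, \qquad \text{e.g. } \left|\frac{12}{3} - 2\right| = 2.$$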

Program Representation

Online Reach

The Online Reach Indicator is divided into three components. The Social Media Reach multiplier reflects the extent to which lists were able to reach people through Facebook, Instagram and the like, an especially important skill in this online campaign; again, if voters are not aware of a list, they will not vote for it. The Website Indicator does not measure the quality of a website: we have yet to find a way to do that at The Quarterly. Instead, it sticks to a binary question: if a list maintains a website, this expands its potential reach, giving it a possible edge over a list that does not. More qualitatively, the lack of a website may be interpreted as a sign of reduced investment in the campaign, be that true or far from it. Finally, our last indicator of Online Reach considers the official Q&A time and meeting hours dedicated to voters discovering and engaging with the lists. We apply a simple ratio comparing which lists granted voters more time to meet them and ask them questions; in short, this indicator serves to estimate which lists were more “accessible” than others.

Social Media Reach

Firstly, we found the number of followers each list had on its Facebook page – we used this data to form a ratio (FBP Ratio = Facebook Page Ratio), similar to earlier ones, with the numerator corresponding to a list’s number of followers and the denominator to the highest number of followers among lists vying for a certain bureau. In this sense, we did not cross-compare the followers of a BDE list and those of a BDA list, as these lists are not competing for the same vote.

Then, we reviewed which lists had also created an individual Facebook account, in addition to their pages. We compared the number of friends each of these accounts had, and made a ratio out of this (FBF Ratio = Facebook Friends Ratio). As only four out of seven lists had created an individual Facebook account, we filled in the number of friends for these lists’ accounts and gave a null score to the other lists. The lists’ individual accounts were used to send event invitations directly to students – this explains why these four lists ranked higher than the others on our Average Event Engagement Indicator (explained later). Since not being able to send official event invitations directly to voters, at least as the list, is an impediment, lists without an individual Facebook account were de facto sanctioned with an FBF score of 0, valued at 10% of the overall Social Media Reach multiplier.

Thirdly, we examined the number of Instagram followers for each list. We designed a ratio similar to the previous ones (IGF Ratio = Instagram Followers Ratio).

To compute the Social Media Reach multiplier, we multiply the FBP Ratio by the IGF Ratio, weighting the product with a 0.9 coefficient and summing it with the FBF Ratio, weighted with a 0.1 coefficient.
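Written out with the coefficients above, and using the ratio names already defined, that combination reads:

$$\text{Social Media Reach} = 0.9\,(\text{FBP Ratio} \times \text{IGF Ratio}) + 0.1\,\text{FBF Ratio}$$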

Website indicator

As campaigns took place exclusively online, websites matter even more: a student eager to find out more about a list will be hard-pressed to scroll down a rabbit hole of an Instagram picture gallery, up to 111 posts in BirDE’s case. A website is the perfect tool to organize all of the information a list might want to convey, and it enjoys the advantage of demonstrating to voters an important level of dedication and investment. Thus, lists with a website were credited with a 1, and lists without a website with a 0 (counting for 10% of the Online Reach Indicator).

Q&A+M Ratio (referring to the official Q&A and Meeting hours that were dedicated to discovering and engaging with the lists during the week)

The focus of this ratio is the number of hours dedicated to students being able to meet the team and ask questions, as indicated on official schedules. This ratio evaluates an essential component of “reachability”: it is one thing to expose students to events and activities via posts; it is another to enable them to reach list members directly, yet another testament to commitment. 

To arrive at the Online Reach Indicator, we attributed a different coefficient to each of our three components, consistent with their importance and reliability, before summing them. We express Social Media Reach using σ, the Website Indicator using ω, and the Q&A+M Ratio using κ.

Engagement Indicator

The engagement indicator is made up of three parts. First, we study voter engagement with respect to each list’s campaign video, for many their first point of contact and first impression of a list – in a way, we see the video as the flagship marker of a campaign, and give it a weight that reflects this gravity. While a failure at that step will not determine the outcome of the elections, making an impressive entrance is nonetheless important. Second, we aim our attention at all of the events organized by the lists, with the goal of determining which lists drew the most engagement. Third, we examine an unusual indicator, list mentions in personal Instagram stories, to complete our picture of online engagement during the campaign.

Campaign Videos | Video View Ratio and Video Reaction Ratio

The Video View Ratio and the Video Reaction Ratio are very similar to the ratios applied last year, comparing views between lists vying for a certain bureau, but not across bureaus. It is here, though, that for the first time we face a large fork in the road – although the BDE is deemed to be a more popular bureau, our data this year shows that this did not translate into BDE lists receiving more views or reactions on their campaign videos. What’s more, no BDE campaign video surpassed the 2,000-view milestone; the three BDE campaign videos of this year received fewer aggregate views than the two BDE campaign videos of last year. Ergo, we decided to stay with intra-bureau comparison, although a strong case can be made to compare apples to oranges and pit all videos against one another (the video views and reactions used were noted down Sunday at 10:15 PM).

The Video View Ratio is similar to our other ratios. A list’s video views are denoted by δ, and the highest number of video views among lists competing for a given bureau by δ_max.

Video View Ratio = $\frac{\delta}{\delta_{\max}}$

The second difference relates specifically to the Video Reaction Ratio: comments made by non-list members were added to the overall reaction calculations, which already included “love” reactions (coefficient of 1: doesn’t get much better), “wow” reactions (0.9 coefficient), “like” reactions (0.8 coefficient), and “haha” reactions (0.6 coefficient). Comments were attributed a coefficient of 1 (meaning that a comment is considered to be equivalent to a love reaction). Here, we express a list’s weighted reactions with ρ and the highest weighted reaction count among all lists with ρ_max.
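With those coefficients, the weighted reaction count and the resulting ratio can be written as:

$$\rho = 1\cdot\text{love} + 0.9\cdot\text{wow} + 0.8\cdot\text{like} + 0.6\cdot\text{haha} + 1\cdot\text{comments}, \qquad \text{Video Reaction Ratio} = \frac{\rho}{\rho_{\max}}$$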

Average Event Engagement Ratio

Online event engagement is tricky to measure. To do so, we relied on the data provided by Facebook Analytics: we studied the “going” and “interested” responses in relation to the lists’ events. Given that voters participating in events do not necessarily tick “going” on the Facebook event, and given that, inversely, the students ticking “going” will not always attend an event, this data has to be considered as it is: imperfect, but workable.

First, we gathered the total number of “going” responses for all of a list’s events taken en masse, and then did the same with the total number of “interested” responses. A “going” response is valued with a coefficient of 1, and an “interested” response with a 0.5 coefficient. While an interested person might not ultimately participate in the event, that person has still partly engaged with it by reading its description and thinking about joining. We denote a list’s average event engagement with ε and the highest average event engagement among all lists with ε_max.

Average Event Engagement Ratio (AEER) = $\frac{\varepsilon}{\varepsilon_{\max}}$

List Mentions on Instagram Stories Ratio (LMIGS Ratio)

This indicator relies on the assumption that lists on Instagram have a tendency to repost all, or most of, the positive stories they are mentioned in. While this assumption might somewhat limit the results obtained from this indicator, having a voter mention a list in their story is a concrete and measurable demonstration of voter engagement. It underlines individual support for a list and especially has an influence on the friends and peers of the individual who decides to share the post. The data used to measure mentions was collected on the lists’ official Instagram accounts, from Thursday 2PM to Sunday 8PM. We express a list’s mentions with μ and the highest number of mentions among all lists with μ_max.

We multiply the Video View Ratio, the Video Reaction Ratio, and the Average Event Engagement Ratio (AEER), weighting this product with a 0.95 coefficient. We sum it with the LMIGS Ratio, weighted with a 0.05 coefficient, to obtain the Engagement Indicator.
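In symbols, abbreviating the Video View Ratio as VVR and the Video Reaction Ratio as VRR:

$$\text{Engagement Indicator} = 0.95\,(\text{VVR} \times \text{VRR} \times \text{AEER}) + 0.05\,\text{LMIGS Ratio}$$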

Likeability Indicator

The Likeability Indicator is perhaps our most obvious yet most important indicator. Plainly stated, if you have a severe disdain for a list, you, in all likelihood, won’t vote for it, no matter how much work it dedicated to its program or its campaign. In the words of George Bernard Shaw, you will “leave no turn unstoned”. Inversely, if you love a certain list and its members, you will probably turn a blind eye to its shortcomings and controversies. We subdivided our indicator into four parts: Member Likeability, Promises’ Likeability, Follow-Back Propensity, and the Beer Poll Multiplier.

To assess if voters like a list, we have to look at whether or not they like a list’s members, and the promises that come with them – these concerns being both covered by the first two indicators. The Follow-Back Propensity is a more reliable social media statistic that carries a twofold significance: it conveys a proportion of people that appreciate your list in some way and it reflects, at the same time, any form of mass following behaviour aimed at disproportionately increasing your followers. Finally, the Beer Poll Multiplier relays the overall feeling that voters have towards the different lists. If you don’t want to sit down for a cold one with them, it’s unlikely that you’ll consider their potential contribution to be paramount to their character. Having been used to rightly predict the outcome of every US presidential election since its creation, the Beer Poll will enable us to further confirm our Likeability Indicator as one of our top indicators. 

Member Likeability

This first indicator looks at the lists’ Instagram posts presenting their team members. We start by computing the total number of likes these posts received, before rendering an average of likes per post. We apply the same methodology to the comments these posts received from non-list members, compiling the total number of comments and transforming it into an average as well.

To calculate Member Likeability, we create two ratios, one associated with the likes per post, and another with the comments per post. We express a list’s average of likes per post with λ, the highest average of likes per post with λ_max, a list’s average of comments per post with γ, and the highest average of comments per post with γ_max.

Member Likeability = $0.85 \cdot \frac{\lambda}{\lambda_{\max}} + 0.15 \cdot \frac{\gamma}{\gamma_{\max}}$

Promises’ Likeability

This second indicator relies similarly on the likes and comments (by non-list members) received on posts presenting a list’s promises. Averages of likes per post and comments per post are computed as above. The difference here is the much lower number of comments made on promise-related posts. Therefore, the weight of comments in the overall indicator was reduced from the 15% above to 5% for Promises’ Likeability.

To calculate Promises’ Likeability, we again use two ratios. λ is a list’s average of likes per post, λ_max is the highest average of likes per post, γ is a list’s average of comments per post, and γ_max is the highest average of comments per post.

Promises’ Likeability = $0.95 \cdot \frac{\lambda}{\lambda_{\max}} + 0.05 \cdot \frac{\gamma}{\gamma_{\max}}$

Follow-Back Propensity Ratio

To determine Follow-Back Propensity, we use last year’s methodology: we divide a list’s number of followers by the number of accounts it follows. As it may seem unreasonable to expect a 1:1 Follow-Back Propensity, we form a ratio where a list’s followers/following ratio, denoted φ, is divided by the highest followers/following ratio among all lists, denoted φ_max.

Follow-Back Propensity Ratio = $\frac{\varphi}{\varphi_{\max}}$

The Beer Poll Multiplier

The Beer Poll, with all the mythical weight associated with it, will serve as the key multiplier to our Likeability Indicator. Instead of asking “Who would you rather sit down for a cold one with?”, our question to Sciences Po students was: “On a scale from 1 to 10, how much would you enjoy having a beer with the following lists?” The poll was sent to non-listing Sciences Po students of both programs and in both years, as well as to exchange and 3rd year students, essentially to all those that could vote. Responses were limited to one per student, and we received 67 responses in total.

To convert each list’s grade into the Beer Poll Multiplier, we divided the average obtained from the poll for each list, expressed by χ, by 10, to obtain a number between 0 and 1.

Beer Poll Multiplier = $\frac{\chi}{10}$

To compute the Likeability Indicator, we multiply Member Likeability by Promises’ Likeability and the Follow-Back Propensity Ratio, and assign this product a 0.20 coefficient. We then sum it with our Beer Poll Multiplier, weighted with a 0.80 coefficient, to attain our final Likeability Indicator.
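In symbols:

$$\text{Likeability Indicator} = 0.20\,(\text{Member Likeability} \times \text{Promises' Likeability} \times \text{Follow-Back Propensity Ratio}) + 0.80\,\text{Beer Poll Multiplier}$$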

Given that we’re nearing our final grades, we decided to throw the same wrench into our algorithm that we did last year, just to keep things interesting. Unbuttoning that top button, we welcome the Knowledge is Power Indicator back into the fray, unchanged and steady, tackling each list’s cumulative, well, knowledge. To quantify knowledge, we used the best proxy available, and thus computed the number of list members having bought at least one of the two editions of The Quarterly.

The Knowledge is Power Indicator (KiP)

Last year, The Quarterly introduced this seemingly less-serious indicator to ask for the support of student publications and groups of students so that we could take on the role we feel we deserve: to matter in elections. We wrote then – “Media is a force in politics in the real world, why shouldn’t it be a force at Sciences Po?”

It has been a year, and sadly we have to ask this same question again. What we don’t need to question again, though, is the place of this indicator in our algorithm: last year, the KiP indicator got all the winners right, and so it earns its spot this year, and will continue to do so until it falls short. The Knowledge is Power Indicator defines τ as the number of list members having purchased one of the two editions of The Quarterly published this year, and η as the total number of list members.

Knowledge is Power Indicator (KiP) = $\frac{\tau}{\eta}$

The Final Grade

To determine the final grade, we attribute a coefficient to each of our five indicators, before summing them all together. The Diversity Indicator receives a coefficient of 0.15, the Online Reach Indicator a 0.2, and the Engagement Indicator is weighted at 0.25. We apply a larger coefficient of 0.35 to the Likeability Indicator, as we consider Likeability to be of marginally greater importance. Finally, the Knowledge is Power Indicator, despite its proven quality of foresight, is credited a smaller 0.05 coefficient. Final grades are below.
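For readers who prefer code to coefficients, below is a minimal sketch of this final weighting step. One assumption is ours alone: that the unit-interval score gets scaled onto the “1-20” stamp mentioned in the next paragraph. The indicator values in the example are placeholders, not real list data.

```python
# Minimal sketch of the final weighting step (illustrative only).
# Assumption: the weighted score, which lands between 0 and 1,
# is scaled to a mark out of 20 for the final "1-20" stamp.

WEIGHTS = {
    "diversity": 0.15,
    "online_reach": 0.20,
    "engagement": 0.25,
    "likeability": 0.35,
    "knowledge_is_power": 0.05,
}

def final_grade(indicators: dict) -> float:
    """Weighted sum of the five indicators (each normalized to [0, 1])."""
    score = sum(WEIGHTS[name] * indicators[name] for name in WEIGHTS)
    return round(score * 20, 2)

# Hypothetical list, placeholder values only:
example = {
    "diversity": 0.82,
    "online_reach": 0.74,
    "engagement": 0.68,
    "likeability": 0.91,
    "knowledge_is_power": 0.50,
}
print(final_grade(example))  # 15.69
```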

To conclude, we’d like to note that, despite our reputation as election wonks, this article serves to grade, not to predict, the outcome of elections. We are working on a way to communicate a model more akin to that of Nate Silver’s FiveThirtyEight, one which accounts for competition, especially in three-list scenarios, but for now, a simple “1-20” stamp will represent our closest effort to mimic the highly complex science of election prediction. While our grades last year did indeed correlate with the winning lists and predict the winners of the elections with 100% accuracy, this year, things have been shaken up in a way that prompts us to make this distinction. To take the BDE race, for instance: while Sciences P’Oasis achieved the best mark among all the lists running for BDE, we failed to introduce a more intricate runoff model that could account for vote-stealing and vote-sharing, dynamics which can give one list a clear advantage over the other two and which are not expressed through our current algorithm. While this type of speculation lies in the domain of political data science and tricky game theory, we hope to get there eventually, with your support and feedback. We’re fine with being the lists’ professor for the time being.

End Note

The Quarterly would like to extend a sincere thank you to all of the lists that were understanding and flexible in helping us collect the data we needed to power this article. Considering just how tumultuous everything has been as of late, we laud them for what they made of the first digital Campaign Week. We would also like to thank all of the students who took part in our Beer Poll, despite being overwhelmed with a deluge of standard Campaign Week shenanigans and undoubtedly affected by the situation we all currently find ourselves in. Election results will undoubtedly set the bar higher for next year’s Campaign Week data collection: what to change, what to improve, and what to pay greater attention to. In the meantime, our inboxes are open to every suggestion and comment. See you next year, and stay safe. It’s been a pleasure writing for you all.
