Project Duration: 4 weeks

Domain: Machine Learning, Sociology

Tool: Weka


Project Overview

 

In today’s busy world, finding a suitable partner is difficult and time-consuming, which is why many people turn to speed dating. This paper explores which attributes influence people’s decisions in speed dating and what it takes to win approval from a potential partner. It is the final project for the course Applied Machine Learning and applies core machine learning techniques to predict matching results.

 

Data Collection

 

The dataset I explore in this project is called “Speed Dating Experiment”. It was compiled by Columbia Business School professors Ray Fisman and Sheena Iyengar for their paper Gender Differences in Mate Selection: Evidence From a Speed Dating Experiment. The dataset was generated from a series of experimental speed dating events held from 2002 to 2004 and includes demographics, dating habits, lifestyle information, an attribute evaluation questionnaire completed when participants signed up, and each participant’s ratings of their dates during four-minute interactions with members of the opposite sex. Participants were asked to rate each date on six attributes: Attractiveness, Sincerity, Intelligence, Fun, Ambition, and Shared Interests. At the end of the four minutes, participants were asked whether they would like to see their date again. If both people “accept,” each is subsequently provided with the other’s contact information, and they are “matched” with each other.

Baseline Performance

 
 
Table 1. First round baseline analysis with default settings


I chose six algorithms in WEKA to perform the first round of baseline analysis with default settings. All of these algorithms were introduced in our class and applied in previous assignments: Naïve Bayes, Logistic, SMO, LWL, JRip, and J48. I used 10-fold cross-validation, and the results are shown in Table 1.
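For readers who prefer the Weka Java API over the Explorer, the sketch below reproduces this baseline run. It is a minimal illustration only: the ARFF file name, the position of the class attribute, and the AUC class index are assumptions, and the numbers reported here were produced through the Explorer GUI.

import java.util.Random;

import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.classifiers.functions.Logistic;
import weka.classifiers.functions.SMO;
import weka.classifiers.lazy.LWL;
import weka.classifiers.rules.JRip;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class BaselineComparison {
    public static void main(String[] args) throws Exception {
        // Load the speed dating data; the file name is a placeholder.
        Instances data = new DataSource("speed_dating.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);  // assumes "match" is the last attribute

        Classifier[] classifiers = {
            new NaiveBayes(), new Logistic(), new SMO(),
            new LWL(), new JRip(), new J48()
        };

        // 10-fold cross-validation with default settings for each classifier.
        for (Classifier c : classifiers) {
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(c, data, 10, new Random(1));
            System.out.printf("%-12s accuracy=%.2f%%  kappa=%.3f  AUC=%.3f%n",
                c.getClass().getSimpleName(),
                eval.pctCorrect(), eval.kappa(),
                eval.areaUnderROC(1));  // AUC for the "matched" class, assumed to be class value index 1
        }
    }
}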

 

 

Data Exploration 

 

Considering that the dataset still includes 66 features, I think it is necessary to further reduce or combine the unnecessary ones. According to the code key documents, the six attributes are rated several times, each time focusing on a different aspect. For example, one question asks “what you look for in the opposite sex,” and people are required to rate their expectations for a dating candidate on the six attributes. Another question asks “what you think most of your fellow men/women look for in the opposite sex,” using the same six attributes as the rating metric. On reflection, I realized that what I care about most are the ratings people gave on the scorecard during the session, not what they stated or expected before the session, nor what they reflected on after it.

Attribute selection in WEKA also supported this assumption. I used CorrelationAttributeEval with 10-fold cross-validation, and the results are shown in Table 2.
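The same ranking can be reproduced with the Weka Java API. The sketch below is an illustration only: the ARFF file name and class attribute position are assumed, and the report’s ranking was actually produced in the Explorer’s Select attributes panel.

import weka.attributeSelection.AttributeSelection;
import weka.attributeSelection.CorrelationAttributeEval;
import weka.attributeSelection.Ranker;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class RankAttributes {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("speed_dating.arff").getDataSet();  // placeholder file name
        data.setClassIndex(data.numAttributes() - 1);  // assumes "match" is the last attribute

        // Rank attributes by their Pearson correlation with the class.
        AttributeSelection selector = new AttributeSelection();
        selector.setEvaluator(new CorrelationAttributeEval());
        selector.setSearch(new Ranker());

        // Cross-validation mode (10 folds), mirroring the setting used in the report.
        selector.setXval(true);
        selector.setFolds(10);
        selector.setSeed(1);

        selector.SelectAttributes(data);
        System.out.println(selector.toResultsString());
    }
}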

Inspired by this finding, I decided to keep a limited number of features based on the rankings, for two reasons: selecting the most relevant features improves interpretability, and a smaller feature space runs faster and allows more extensive optimization. The selected features are listed in Table 3.

Table 2. Attribute selection output


 
Table 3. Attribute selection results


 

 

Error Analysis 

 

To further improve performance, I used WEKA’s classifier visualization to look at relationships among features. One key finding was that when I put one of the six attributes on the X axis and the partner’s corresponding attribute on the Y axis (for instance, fun and fun_o), the matched instances (shown in red in Graph 1) tend to cluster in the upper-right area, which suggests that combining fun and fun_o could improve performance.
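One way to act on this observation is to add interaction features that combine a rating with the partner’s corresponding rating. The sketch below uses Weka’s AddExpression filter; the file name, the a3/a4 attribute indices, and the choice of a product as the combination are assumptions for illustration, not necessarily how the seven new features in this report were built.

import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.AddExpression;

public class CombineFunRatings {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("speed_dating_selected.arff").getDataSet();  // placeholder file name

        // Build a combined feature from the two ratings; "a3" and "a4" are
        // placeholder 1-based indices for the fun and fun_o attributes in this file.
        AddExpression combine = new AddExpression();
        combine.setExpression("a3*a4");
        combine.setName("fun_x_fun_o");
        combine.setInputFormat(data);

        // The new attribute is appended to the end of the filtered dataset.
        Instances augmented = Filter.useFilter(data, combine);
        System.out.println("New attribute: "
            + augmented.attribute(augmented.numAttributes() - 1).name());
    }
}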

 
 
Graph 1. WEKA classifier visualization


 

Final Results

 

After two rounds of baseline performance and feature space selection, I decided to use three algorithms, Naïve Bayes, Logistic, and J48, to produce the final results. The results are reported as Kappa and area under the ROC curve (AUC). Performance improved after the seven new features were added to the dataset. J48 improved the most, from 0.70 to 0.77 AUC; its optimized settings are C = 0.92 and M = 3. However, Logistic consistently outperformed Naïve Bayes and J48 (even after optimization), so my final model for this dataset is Logistic.
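As an illustration of this final comparison in the Weka Java API, the sketch below evaluates J48 with the tuned settings (-C 0.92 -M 3) against Logistic with default settings and reports Kappa and AUC. The ARFF file name, class attribute position, and AUC class index are assumptions; the figures quoted above come from the Explorer.

import java.util.Random;

import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.functions.Logistic;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class FinalComparison {
    public static void main(String[] args) throws Exception {
        // Dataset with the selected and combined features; file name is a placeholder.
        Instances data = new DataSource("speed_dating_final.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);  // assumes "match" is the last attribute

        // J48 with the tuned settings reported above: -C 0.92 -M 3.
        J48 tree = new J48();
        tree.setConfidenceFactor(0.92f);
        tree.setMinNumObj(3);

        Logistic logistic = new Logistic();  // default settings

        for (Classifier c : new Classifier[]{tree, logistic}) {
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(c, data, 10, new Random(1));
            System.out.printf("%-10s kappa=%.3f  AUC=%.3f%n",
                c.getClass().getSimpleName(), eval.kappa(),
                eval.areaUnderROC(1));  // AUC for the "matched" class, assumed to be class value index 1
        }
    }
}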

Discussion 

This paper can serve as a starting point for understanding people’s preferences in speed dating. Prior work in economics has emphasized final matches, but the theory and empirics have not been well suited to studying how these matches are actually formed. In this paper I mainly explored six attributes that can directly or indirectly impact matching results. More attributes are needed in future work to make better predictions.

The optimization results indicate that changing parameters such as C and M does not make much difference. Although a single algorithm may improve slightly, its overall performance can still fall short of another algorithm’s default settings. One insight from this process is to pursue the major improvements in the error analysis phase; optimization is like adjusting a radio’s fine tuner: it can refine accuracy, but it cannot fix major mistakes.

Another insight is that a simple algorithm can sometimes work better than a complex one. In my first round of baseline performance, I chose both Logistic and SMO. SMO is clearly more complex and took longer to run, yet Logistic performed the best of all six algorithms. This indicates that added complexity is not always better: when the simple solution works, choose the simple solution.

There are a number of ways this paper could be improved. Most notably, the model could be applied to a broader set of subject populations to examine how well the predictions generalize. A second improvement would be to develop datasets with similar individual preferences and attributes that focus on longer-run relationships. A third improvement would be to integrate this model with the existing recommendation systems run by matchmaking companies, so their customers receive more accurate recommendations based on their background information and stated preferences. Overall, a deeper understanding of dating preferences, together with better machine learning models, can be a powerful tool for examining the dating market and for creating more opportunities to generate matches.

See the complete report here

 
