Finding the Next James Anderson

Hello, and welcome to today’s blog, in which we will be scouting for the next James Anderson. Possibly. The idea behind this blog is nothing new; in fact, I have stolen it from another sport. The Rangers Report blog wrote a piece about scouting for the best youth players using age-adjusted stats, an idea based on Vollman’s work on ice hockey. I always like to say that a lot of the best ideas are re-purposed from other areas! So how am I going to apply it? Well, we are going to take the bowling stats from the Second XI County Championship, age-adjust them, and see which bowler under the age of 21 looks the most promising.

[Table: the first 20 rows of the bowler data]

Above are the first 20 rows of the 350 bowlers we are going to be looking at. The top wicket-taker is Nijjar from Essex; however, he is 24 and therefore will not be included. The first under-21 player on the top wicket list is 20-year-old Mike of Leicestershire. So let’s adjust the data and see where we end up.

The first issue is that the players have all bowled differing numbers of balls, so I need to bring them up to the same level. To do that, I took each bowler’s strike rate over the balls they actually bowled and extended it to a full season of, say, 1500 balls. In the table below, the estwicket column is the number of wickets each bowler would take if his strike rate continued over a full 1500 balls.

[Table: estimated wickets (estwicket) over a full 1500-ball season]
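As a rough sketch of this step (the data frame and column names here are my own, not necessarily those in the underlying data), the extrapolation is just a couple of lines of dplyr:

    library(dplyr)

    # 'bowlers' is assumed to have one row per bowler, with columns
    # 'balls' (balls bowled) and 'wickets' (wickets taken)
    bowlers <- bowlers %>%
      mutate(strike_rate = balls / wickets,      # balls per wicket
             est_wickets = 1500 / strike_rate)   # wickets over a full 1500-ball season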

However, a bowler who has taken a lot of wickets in a small number of balls is unlikely to keep that rate up to the end of a season (sorry, Ben Coad). Regression to the mean is a well-established statistical phenomenon, so when we extrapolate performance we need to account for it.

In the Rangers Report blog, they applied 1% regression for every match of projected performance. I can’t do that, as my analysis is based on balls, and in multi-day matches bowlers will bowl different numbers of balls. Therefore I decided to apply the same 1% regression for every 100 extrapolated balls.

[Table: estimated wickets after regression to the mean]

The table above shows how much this affects the bowlers on the list, with most bowlers seeing a reduction in wickets.
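One plausible way to code that adjustment is sketched below: it pulls each estimate towards the league-average figure by 1% for every 100 extrapolated balls. The exact formula behind the table may differ slightly.

    # continuing the sketch above (dplyr already loaded)
    league_avg <- mean(bowlers$est_wickets, na.rm = TRUE)

    bowlers <- bowlers %>%
      mutate(extra_balls = pmax(1500 - balls, 0),               # balls we had to extrapolate
             reg_amount  = pmin(0.01 * extra_balls / 100, 1),   # 1% per 100 extrapolated balls
             reg_wickets = (1 - reg_amount) * est_wickets + reg_amount * league_avg)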

The final step is to adjust the total wickets based on each player’s age. First, we filter for all players aged 21 and under, and then apply Vollman’s age curve below.

[Figure: Vollman’s age curve]

The age curve is broken down by year and month; however, I only have ages in whole years, so I will just be using the numbers in the first column. The graph below shows the results.

[Figure: age-adjusted estimated wickets for under-21 bowlers]

Based on this method, Szymanski is the best bowler in the 2nd XI County Championship. However, the large dot means we have had to extrapolate a lot for him. Another interesting point is that three of the top five estimated wicket-takers are left-arm spinners. The England team is badly missing a reliable spinner, particularly away from home; could one of the three be the future England spinner?
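For anyone curious how the filtering and age adjustment might look in code, here is a rough sketch. The multiplier values below are purely illustrative placeholders rather than Vollman’s actual curve, and the 'age' column name is again my own.

    # placeholder multipliers for illustration only, not Vollman's published values
    age_curve <- dplyr::tibble(
      age        = 17:21,
      multiplier = c(1.30, 1.22, 1.15, 1.08, 1.03)
    )

    young_bowlers <- bowlers %>%
      filter(age <= 21) %>%
      left_join(age_curve, by = "age") %>%
      mutate(age_adj_wickets = reg_wickets * multiplier) %>%
      arrange(desc(age_adj_wickets))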

There is lots more work that can be done with this data, such as looking back historically at how these numbers relate to future County Championship averages, or applying a similar model to batsmen, which will be detailed in a future blog. Let me know your thoughts: have you seen Szymanski bowl? Ideally, I would have preferred younger players, but I think the youngest players play in the Under-17 County Championship.

 

 


Data Science Ethics and Societal Implications

Hello, I wanted to do something different today. Recently I applied to a data science master’s at a local university, and for the application you needed to discuss the ethical and societal implications of an aspect of data science. Unfortunately, I didn’t get accepted on the course, so I’m going to share my answer with you, as I thought it was an interesting question.

“Information is the oil of the 21st century, and analytics is the combustion engine”. However, with this great power comes great responsibility.

Data analytics encompasses everything from daily shopping habits to application usage on a smartphone and, more significantly, users’ Google search history. The data collected, generated and analysed can have huge political ramifications, helping to determine election outcomes and, ultimately, party victories or losses.

Nonetheless, this raises the question of how ethical such data harvesting is. In particular, there are key ethical debates surrounding the awareness and understanding of each individual who is targeted in the name of data science. An especially challenging issue is that data is very often generated without an individual’s consent, knowledge or understanding. These are practices that, if they occurred in other fields, would be viewed as highly unethical.

Algorithms are now more and more prevalent throughout society. They can be extremely useful in some situations, from suggesting suitable music linked to a user’s profile to presenting advertisements in line with a Google search history. If an algorithm goes wrong in these instances, you either end up listening to a song for 30 seconds before skipping to the next one, or see an advert for something you have no interest in. Other algorithms, however, such as those used to decide whether an individual gets parole, can have far more sinister outcomes. In order to be considered ethical and fair, these life-altering algorithms must, in my view, have some form of independent verification to ensure they are just and free from discrimination against any ethnic group, sexual orientation or political affiliation within society. The users of these algorithms should also be thoroughly trained to understand that no decision an algorithm produces can be 100% certain; in other words, there will always be an element of probability.

Significantly, the Cambridge Analytica scandal is one of the key ethical case studies within data science. The negative media coverage and public outcry from this scandal suggest that it is highly unethical to obtain an individual’s data from a public website and subsequently employ this sensitive information to target and direct political advertising and influence voting. However, would such data harvesting still be perceived as unethical if it were employed in medical research? For example, what if the data had been used to identify people in the earliest stages of cancer and could, most positively of all, increase survival rates? Would there be such universal outrage, or would a more lenient ethical stance be taken, if the data were employed for “good”?

Additionally, data science is having an impact on society through the risk of improperly communicated data being used to reinforce a particular belief and, ultimately, as a weapon to heighten prejudice. This has a wide impact, particularly in general elections, and could cause politicians to focus on misguided policies. It could lead to the need for an independent body within society to fact-check and review published data science work, ensuring the public receives accurate information.

Biketown EDA – P2

Hello, and welcome to the second part of this blog doing exploratory data analysis on the Biketown dataset. If you haven’t read the first part, go check it out. As an overview, we found that most of the records came from either subscribers to the system or casual users who might just use it every now and then, so I decided to compare those two groups. We saw, in a smaller sample, that most of the casual users rode for recreation while the subscribers mostly rode to commute. Today we are going to look at the distances and speeds the groups travel and where they rent the bikes from, but first we are going to look at when they rent the bikes:

[Figure: number of rentals by hour of day for casual and subscriber riders]

This graph fits the idea that subscribers tend to rent the bikes for commuting, as there are two large spikes at around 8am and around 5pm, the peak hours for people going to and from work. Casual users don’t tend to use the system in the morning; however, there is consistent usage throughout the afternoon. This could be tourists exploring the city, for instance. It’s strange that neither group reaches its minimum until well after 2am. Are these people using the system to get home from their nights out? Beware of the drunk Portland cyclist!
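For a flavour of how this plot can be built, here is a minimal sketch. It assumes the combined data frame is called trips, with character StartTime and PaymentPlan columns; the names in your copy of the data may differ, and the full code is on my GitHub.

    library(dplyr)
    library(ggplot2)

    trips %>%
      mutate(start_hour = as.integer(sub(":.*", "", StartTime))) %>%  # hour part of "HH:MM"
      count(PaymentPlan, start_hour) %>%
      ggplot(aes(start_hour, n, colour = PaymentPlan)) +
      geom_line() +
      labs(x = "Hour of day", y = "Number of rentals", colour = "Payment plan")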

[Figure: density plot of trip distances for casual and subscriber riders]

The density plot of the distances the two groups travel is interesting, but I think it fits the pattern so far. Subscribers tend to make shorter journeys, perhaps because they are covering the last few miles to work, whereas casual users may be exploring the city and therefore cover more distance.
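Under the same assumptions about column names (Distance_Miles here is assumed to be a numeric distance in miles), a sketch of the density plot might look like:

    trips %>%
      filter(Distance_Miles > 0, Distance_Miles < 30) %>%   # drop zero-length and extreme trips
      ggplot(aes(Distance_Miles, fill = PaymentPlan)) +
      geom_density(alpha = 0.5) +
      labs(x = "Distance (miles)", y = "Density", fill = "Payment plan")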

[Figure: average speed against trip distance for casual and subscriber riders]

Now let’s check out the speed curves for both groups: the further a person travels, the slower their average speed. Subscribers are generally much faster riders at all distances. The increase in speed towards the 25-mile mark is, I think, down to a smaller amount of data. At the shorter distances subscribers are significantly faster; are they using the system to get from A to B as quickly as possible?
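Deriving the speeds needs one extra step, converting the duration into hours. A sketch, assuming Duration is an "HH:MM:SS"-style character column:

    library(lubridate)

    trips %>%
      mutate(hours = period_to_seconds(hms(Duration)) / 3600,
             mph   = Distance_Miles / hours) %>%
      filter(hours > 0, mph < 30) %>%            # drop zero-length and implausible rides
      ggplot(aes(Distance_Miles, mph, colour = PaymentPlan)) +
      geom_smooth(se = FALSE) +
      labs(x = "Distance (miles)", y = "Average speed (mph)", colour = "Payment plan")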

[Figure: heat map of start locations for casual riders]

Now let’s look at the start and end locations for the casual riders. The heat map above shows that most rides are clustered around the city centre, possibly moving from one tourist spot to another, with a fairly even distribution across the central locations.
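The heat map itself can be sketched with a 2D binning of the start coordinates, assuming StartLatitude and StartLongitude columns (again, names and the "Casual" label are taken from my copy of the data):

    trips %>%
      filter(PaymentPlan == "Casual") %>%
      ggplot(aes(StartLongitude, StartLatitude)) +
      geom_bin2d(bins = 60) +
      scale_fill_viridis_c() +
      coord_quickmap() +
      labs(x = "Longitude", y = "Latitude", fill = "Starts")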

[Figure: heat map of start locations for subscribers]

The subscriber heat map above shows that people generally take the bikes from much further out compared to the casual riders, who are possibly just using them to get between the tourist hot spots. Once again most of the usage is on the left side of the river; that must be the area where people like to get about. The start locations around the outskirts are also much denser, so it’s clear people are using the system from further out to ride into the centre of the city.

That’s it for this exploratory data analysis. I think we have found some interesting insights and, at the very least, confirmed what you would expect. I hope you have found it interesting and informative; let me know your thoughts, and if you have looked at this dataset yourself, share what you found. Check out the code on my GitHub, which should be linked somewhere on the website.

Nike Biketown – Exploratory Data Analysis

Hello, and welcome to today’s blog, in which we are going to take a large dataset and do some exploratory data analysis on it. I am going to look at the Biketown dataset, which featured on Tidy Tuesday. If you’re new around here, Tidy Tuesday is a weekly hashtag on Twitter actively promoted by the R for Data Science online learning community. If you’re inspired to learn R and data science like I was, that is a really great community full of wonderful people to start with. I am not going to post full code snippets within the blog as I think it gets too long; however, the full code used will be posted on my GitHub.

[Figure: structure of the combined data frame]

Above is the structure of the data frame. The data comes in numerous CSV files, so I read them all in and created one large data frame structured like so. The second column, the payment plan, looks interesting: it has three levels, Casual, Subscriber and a blank. The system in Portland has a way for a regular user to automate payments to save time. Let’s look at how much of the dataset falls under each of the three payment types:

[Figure: number of records for each payment plan]
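I said I wouldn’t post full snippets, but as a rough sketch of the steps so far: reading the CSVs into one data frame, counting the payment plans, and the filter applied in the next step. The folder path, column name and level labels are my assumptions; the full code is on my GitHub.

    library(readr)
    library(purrr)
    library(dplyr)

    # stack every monthly CSV into one data frame (path is illustrative)
    files <- list.files("data/biketown", pattern = "\\.csv$", full.names = TRUE)
    trips <- map_dfr(files, read_csv)

    # how much of the data falls under each payment plan?
    count(trips, PaymentPlan, sort = TRUE)

    # keep only the two main groups for the rest of the EDA
    trips <- filter(trips, PaymentPlan %in% c("Casual", "Subscriber"))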

As you can see, the vast majority of this dataset is either casual or subscriber, and I think it would be interesting to review the differences between the people on the two main payment plans. Therefore, going forward in the EDA we are going to remove the entries with no payment plan recorded. Later, we could possibly look at a method for working out which payment plan the blanks belong to. First things first, let’s have a look at what type of trips each group takes:

[Figures: trip types recorded for casual and subscriber riders]

The big issue here is that making any conclusion about the type of trips each group takes is going to be difficult: each group has over 200 thousand entries, but fewer than 1,000 of them have a recorded trip type. What we can say is that, within this smaller sample, subscribers tend to use the system for commuting and casual users clearly use it for recreation, which makes sense.

[Figure: payment methods used by casual and subscriber riders]

Now we look at the payment methods that both groups have used. By far the three main payment types for both groups are keypad, mobile and keypad_rfid_card, with subtle differences between the two groups. The RFID card share is clearly higher among subscribers, which must be because subscribers are given a card in order to gain access to the bikes. Casual users are much more likely to use their mobile to gain access. In both groups, the vast majority use the keypad system.

That’s it for part 1 of today’s exploratory data analysis on the Biketown data. Tomorrow we will look at the distances the groups travel, as well as locations and usage times. Let me know your comments on the first part.

F1 Circuit Cluster Analysis – Part 1

Hello there. As you know, I’m currently working through the DataCamp course Data Scientist with R. (If the people from DataCamp are reading this, I’m open to sponsorship!) There will be a further update on how I’m getting on with it later this week; however, today I wanted to focus on applying something new that I learnt: cluster analysis. Cluster analysis lets you take a data frame of two (or more) variables and work out which rows are best grouped together. There are two main methods we are going to look at: hierarchical clustering and k-means clustering. We are going to apply them to Formula 1 circuits. The idea is that there are 21 different circuits currently on the calendar, all with different lengths, height profiles and types of tarmac; can we group them by certain characteristics? For me, as an avid Formula 1 watcher, the differences between the circuits come down to the lengths of the straights and the speed of the corners. Therefore the two metrics we are using are the average straight length and the average corner speed.

  • Average straight length – calculated by measuring each stretch of track on which the F1 car would be running at full throttle, removing any stretches of less than 100m. An example for Spain is below, with the estimated straights in green.

[Figure: the Spanish circuit with estimated straights marked in green]

  • Average corner speed – calculated by allocating each corner to slow, medium or fast. (Unfortunately, I don’t have data on exact corner speeds, but if any F1 team wants to send it over, email me!) The table below shows how many of each a circuit was allocated.

[Table: number of slow, medium and fast corners for each circuit]

As I don’t know the exact speeds of these corners, I have estimated that a slow corner is taken at 80 km/h, a medium corner at 150 km/h and a fast corner at 200 km/h. This leaves us with the following table:

[Table: average straight length and average corner speed for each circuit]
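As a quick sketch of that calculation (assuming a data frame of corner counts with columns slow, medium and fast; the names are my own):

    library(dplyr)

    corners <- corners %>%
      mutate(avg_corner_speed =
               (slow * 80 + medium * 150 + fast * 200) / (slow + medium + fast))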

Hierarchical Clustering

The first thing we are going to look at is hierarchical clustering. The table above is fed into the following code:

[Code screenshot: hierarchical clustering]

This produces the following output:

[Figure: dendrogram of the F1 circuits]

We have five distinct clusters that the F1 circuits fit into. It’s not too surprising that Singapore, Monaco and Hungary fall into the same cluster, or that Belgium and Great Britain come out as similar circuits.
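Since the code above is only shown as a screenshot, here is a rough sketch of what the hierarchical clustering step might look like in R. The data frame and column names are my guesses, and the original may differ (for example, in whether the variables were scaled).

    library(dplyr)
    library(tibble)

    # 'circuits' is assumed to hold one row per circuit with the columns
    # circuit, avg_straight_length and avg_corner_speed
    circuit_features <- circuits %>%
      column_to_rownames("circuit") %>%
      scale()                                  # put both variables on a comparable scale

    hc <- hclust(dist(circuit_features))       # distance matrix -> hierarchical clustering
    plot(hc)                                   # dendrogram, as shown above

    circuits$cluster <- cutree(hc, k = 5)      # assign each circuit to one of the 5 clusters

The cutree labels are what would be used to colour the scatter plot below.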

[Figure: scatter plot of the circuits coloured by cluster]

In the scatter above you can clearly see the difference between the two main clusters, 1 and 2. In cluster 1 the straights are often shorter but the corners are faster, while cluster 2 circuits often have longer straights and slower corners. With a few circuits from each group already used this season, it would be interesting to see whether there are any trends in car speed. That’s it for the first part of this series; next week we will look at whether k-means clustering gives different groupings. In the final part, we will look at applying what we have seen so far this year to try and predict who will win in the later rounds.