Five more tools and techniques for better plotting

Gonzalo Volpi
February 14, 2020 Big Data, Cloud & DevOps

And getting the most out of your data

In real-life data science, plotting matters. In my day-to-day work I spend more time plotting and analysing charts than doing anything else. Let me explain: I work at Ravelin Technology. Our business is data, and specifically analyzing and predicting fraud for online merchants. The company's main product uses a combination of machine learning, network analysis, rules and human insight to predict whether a transaction might be fraud. We build an ad-hoc machine learning model for each one of our clients, but building that model is something that happens at the beginning of the relationship; after that, it mostly requires maintenance.

Maintenance how? Sometimes it's to introduce new features or to account for behavioural changes in customers. It can also be that something changes in the data we receive, or that something was missing when we first built the model because we didn't yet have enough data. It can also happen that either the client or we spot some dodgy performance in our predictions in, for example, one specific country. Whatever the case, there's usually an extensive investigation to find out what the problem is and/or what we could do better. And just to give a bit more context, analyzing the performance of a model for us usually means datasets with millions of rows and thousands of columns. This can only be addressed by plotting; it's almost impossible to find patterns or insights just by looking at the raw data. Plotting allows us to compare features' performance, see evolution through time, distributions of values, differences in mean and median values, and so on.

As I said in my previous story, in our field we must give equal weight to explainability and interpretability. Real-life data science never has you working alone on a project, and your workmates and/or clients usually won't know much about the data you'll be using. Being able to explain your thinking process is a key part of any data-related job. That's why copying and pasting is not enough, and chart personalization becomes key.

Today we'll go through five techniques for making better charts that I've found useful in the past. Some of them are day-to-day tools, while others you'll use only every now and then; but having this story at hand will hopefully come in handy when the moment arrives. The libraries we'll be using are:

import matplotlib.pyplot as plt
import seaborn as sns

With the following style and configurations:

plt.style.use('fivethirtyeight')
%config InlineBackend.figure_format = 'retina'
%matplotlib inline

1. Change range and steps in axis

The default range and steps that matplotlib or seaborn set up are usually good enough for visualizing our data, but sometimes we'll want all the steps on an axis shown explicitly. Or perhaps, something I've found useful: drawing all the data but including the axis labels only for a specific range of the y or x-axis.

For example, let's say we're plotting the distribution of our model's predictions and we want to concentrate on the values between 30 and 50, with a step every two units, without losing sight of the rest of the values. Our original seaborn 'distplot' would look like:

We now have two options for accomplishing the idea above:

ax.set_xticks(range(30, 51, 2))
ax.xaxis.set_ticks(np.arange(30, 51, 2))

In both cases, we need to specify the starting point, the ending point and the step. Mind how the ending point follows a 'less than' logic rather than 'equal to or less than'. The result would be the following chart:

Also, mind how I'm calling both options from the 'ax' object, given that's the default: whenever we create any kind of chart, an axis ('ax') and a figure ('fig') are automatically created. We can also do the same the following way:

myplot = sns.distplot(mydata)
myplot.set_xticks(range(30, 51, 2))
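A self-contained sketch of the technique, using plain matplotlib and hypothetical random data in place of the model's prediction scores (a matplotlib histogram stands in for seaborn's 'distplot'):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs outside a notebook
import matplotlib.pyplot as plt

# Hypothetical stand-in for the model's prediction scores
scores = np.random.normal(loc=40, scale=10, size=1000)

fig, ax = plt.subplots(figsize=(10, 6))
ax.hist(scores, bins=50)  # plain-matplotlib stand-in for sns.distplot

# Label only the 30-50 range, every two units, without clipping the data
ax.set_xticks(range(30, 51, 2))
```

Note that `set_xticks` only changes where the tick labels appear; all of the data stays visible.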

2. Rotate ticks

This is an easy but very useful tip if, for example, we're dealing with text labels instead of numbers. We can do it just by using the 'rotation' keyword argument of 'set_xticklabels':

ax.set_xticklabels(labels=my_labels, rotation=90)

Note how I'm also passing 'my_labels' to the 'labels' parameter, since that's required when calling 'set_xticklabels'. If you're drawing a 'distplot', you can simply pass the range of values to be shown, while for any other chart you can pass exactly the same array you specified for the x-axis. Also, you can combine this with the first technique like this:

range_step = np.arange(30, 51, 2)
ax.xaxis.set_ticks(range_step)
ax.set_xticklabels(labels=range_step, rotation=90);

Obtaining the following result:
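A runnable sketch of the combination above, again with hypothetical random data and a plain-matplotlib histogram standing in for the seaborn chart:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs outside a notebook
import matplotlib.pyplot as plt

range_step = np.arange(30, 51, 2)

fig, ax = plt.subplots(figsize=(10, 6))
ax.hist(np.random.normal(40, 10, 1000), bins=50)  # hypothetical data

# First set the tick positions, then the (rotated) labels for them
ax.xaxis.set_ticks(range_step)
ax.set_xticklabels(labels=range_step, rotation=90)
```

Setting the tick positions before the labels matters: the labels are assigned one-to-one to whatever tick positions are currently in place.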

3. Change the space in between plots

More often than not, we’ll want to plot several charts at once to compare their results, visualize them all together, or perhaps just to save time and/or space. In any case, we can do that by using ‘subplots’ in a very simple way:

fig, ax = plt.subplots(figsize=(18,10), nrows=2)

We specified two rows, and therefore we’ll be plotting two charts:

sns.distplot(mydata, ax=ax[0])
sns.lineplot(x=mydata['xaxis'], y=mydata['yaxis'], ax=ax[1])

Now, sometimes, instead of having only two charts, we might have more. Perhaps we need to include titles for all of them, or some charts have text labels that need rotating for better readability. In cases like these, we could end up with overlap between plots, and increasing the space between charts can help us visualize them better.

plt.subplots_adjust(hspace=0.8)

Mind how the height of the figure remains the same (10 in this case) but the space between charts increases. If you want to maintain your charts' size, you'd have to increase the figure size through the 'figsize' parameter.

Also, despite its name, the parameter 'hspace' controls the height reserved between rows, i.e. the vertical gap. If you were drawing multiple columns instead of rows, you could accomplish the same with the parameter 'wspace', which controls the width reserved between columns.
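Putting the layout pieces together (note that matplotlib's actual parameter for the gap between columns is 'wspace'):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs outside a notebook
import matplotlib.pyplot as plt

# Two stacked rows, as in the example above
fig, ax = plt.subplots(figsize=(18, 10), nrows=2)

# Widen the vertical gap between the rows
plt.subplots_adjust(hspace=0.8)

# For side-by-side columns the analogous knob would be:
# plt.subplots_adjust(wspace=0.8)
```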

By the way, if you want to found out how to set titles for your charts, you can find that tip and some others for better plotting in my previous story.

NOTE: just like I specified two rows to be drawn above, you could also specify a fixed number of columns. In that case, the axes would be indexed with two indices, like ax[0, 1].
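A quick sketch of that two-index case: with both `nrows` and `ncols` set, `plt.subplots` returns a 2-D array of axes.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs outside a notebook
import matplotlib.pyplot as plt

# A 2x2 grid: ax is now a 2-D array of axes
fig, ax = plt.subplots(figsize=(18, 10), nrows=2, ncols=2)

top_left = ax[0, 0]      # first row, first column
bottom_right = ax[1, 1]  # second row, second column
```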

4. Customize your confusion matrix

Unfortunately, this is not the space for explaining in depth how the confusion matrix works or what it is useful for. Nonetheless, if you fancy learning more about it, I always recommend this story from M. Sunasra.

Now, if you're already familiar with the concept, you might have encountered in the past that the default heatmap created by 'plot_confusion_matrix' from 'sklearn.metrics' sometimes comes out with the upper and lower squares cut off, like in the following picture:

source: https://gis.stackexchange.com/

We can solve this by plotting our own confusion matrix from scratch using just a bunch of lines. For example:

from sklearn.metrics import confusion_matrix

fig = plt.figure(figsize=(12, 10))

cm = confusion_matrix(real_y, pred_y)

labels = [0, 1, 2, 3, 4]

ax = sns.heatmap(cm, annot=True, annot_kws={"size": 12}, fmt='g', cmap="Blues", xticklabels=labels, yticklabels=labels)

bottom, top = ax.get_ylim()
ax.set_ylim(bottom + 0.5, top - 0.5)

ax.set(ylabel='True label')
ax.set(xlabel='Predicted label')

plt.show()

What we’re doing here is:

  1. We create an empty figure, wider than tall, since the colour bar will sit to the right of the heatmap
  2. We get the values of our confusion matrix through 'sklearn.metrics.confusion_matrix'
  3. We specify the 'labels' according to the number of categories we have
  4. We create a heatmap using the values from point 2, specifying: i) 'annot' equal to True, ii) 'annot_kws' for the font size of the annotations (12 in this case), iii) 'fmt' for the string formatting code, iv) 'cmap' for the colour palette, v) and finally the labels for both axes (in a confusion matrix they're the same)
  5. We get the y-axis view limits and reset them, shifted by ±0.5, to undo the cut-off
  6. Last step: we set the y and x labels to 'True label' and 'Predicted label' respectively

The result should be something like this:
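If you want to see exactly what the heatmap is displaying, the matrix values themselves can also be computed by hand with NumPy, which makes the convention explicit: one row per true label, one column per predicted label. The tiny `real_y` and `pred_y` arrays here are hypothetical examples, not real model output:

```python
import numpy as np

def manual_confusion_matrix(real_y, pred_y, n_labels):
    """Rows are true labels, columns are predicted labels."""
    cm = np.zeros((n_labels, n_labels), dtype=int)
    for true, pred in zip(real_y, pred_y):
        cm[true, pred] += 1  # one observation per (true, predicted) pair
    return cm

real_y = [0, 0, 1, 1, 2]  # hypothetical true labels
pred_y = [0, 1, 1, 1, 2]  # hypothetical predictions
cm = manual_confusion_matrix(real_y, pred_y, n_labels=3)
```

The diagonal holds the correct predictions; here the single off-diagonal entry records the one class-0 sample predicted as class 1.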

5. Plot cumulative distributions

Surely I don't need to stress how useful plotting cumulative distributions can be, either for understanding the percentage of elements up to a certain value or for comparing two different groups within our data.

You can easily get this kind of chart through Seaborn's 'distplot' itself just by setting the following:

sns.distplot(my_data, label='my label', color='red', hist_kws=dict(cumulative=True))

We can make the chart look better by setting the limits for the x-axis:

sns.distplot(my_data, label='my label', color='red', hist_kws=dict(cumulative=True)).set(xlim=(0, my_data.max()))
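Under the hood, the cumulative histogram that 'distplot' draws is just a running sum over the bin counts, which you can compute directly with NumPy (here `my_data` is a hypothetical random sample):

```python
import numpy as np

# Hypothetical data in place of my_data
my_data = np.random.normal(loc=40, scale=10, size=1000)

# The cumulative histogram that distplot draws under the hood
counts, bin_edges = np.histogram(my_data, bins=50)
cumulative = np.cumsum(counts) / counts.sum()
# cumulative[i] is the fraction of values at or below bin_edges[i + 1]
```

This normalized form always ends at 1.0 and is monotonically non-decreasing, which is handy when overlaying two groups for comparison.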

As I said at the beginning of the story, some of these tools and tips I use all the time, while others only every now and then. But hopefully, knowing these quick fixes and techniques will help you make better plots and better understand your data.
