Bodo is a new scalable analytics platform that automatically parallelizes and optimizes Python analytics code with the addition of a simple decorator. I wanted to try Bodo and see how it performs on popular data analytics benchmarks. The Monte Carlo approximation of Pi is a popular example, and one of the first used to demonstrate the Spark RDD API. Trying out Bodo on this benchmark on the occasion of Pi Day seemed fitting, and that is what this blog post is about.
The Monte Carlo Pi approximation considers a square with side = 1 and a circle inscribed within this square. Since the circle's area is π(1/2)² = π/4 while the square's area is 1, the probability that a random point in the square is also in the circle is π/4. This probability can be estimated by selecting a large number of random points and dividing the number of points within the circle by the total number of points; multiplying that ratio by 4 gives an estimate of π.
Here’s a simple NumPy program that implements this approximation:
*Figure 1. Monte Carlo Pi simulation code using numpy and using Bodo for parallel computation*
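The code from the figure is not reproduced inline here, but a minimal NumPy sketch of the approximation might look like this (the function name and sample count are illustrative):

```python
import numpy as np

def calc_pi(n):
    # Draw n random points in the square [-1, 1] x [-1, 1]
    x = 2 * np.random.rand(n) - 1
    y = 2 * np.random.rand(n) - 1
    # The fraction of points inside the inscribed circle approximates pi / 4
    inside = np.sum(x * x + y * y < 1.0)
    return 4.0 * inside / n

print(calc_pi(10_000_000))  # prints a value close to 3.1416
```

With 10 million samples the estimate is typically accurate to about three decimal places, since the standard error shrinks as 1/√n.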
I saved the NumPy code shown above in a file called pi.py and ran it in the terminal. Unlike Spark, Bodo requires no special API to parallelize this function: all I need to do is add a @bodo.jit decorator, and Bodo parallelizes and optimizes the function automatically. Figure 3 shows the CPU utilization of the plain NumPy program in htop. It didn’t look bad to me, but that was before I added the @bodo.jit decorator, after which this same code could run on multiple cores using mpiexec.
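As a sketch, the decorated version differs only by the decorator (this assumes Bodo is installed; here the import falls back to a no-op decorator so the snippet also runs with plain single-core NumPy):

```python
import numpy as np

try:
    import bodo
    jit = bodo.jit  # Bodo's JIT decorator parallelizes the function automatically
except ImportError:
    jit = lambda f: f  # fallback: run as ordinary NumPy on one core

@jit
def calc_pi(n):
    x = 2 * np.random.rand(n) - 1
    y = 2 * np.random.rand(n) - 1
    return 4.0 * np.sum(x * x + y * y < 1.0) / n

if __name__ == "__main__":
    # Saved as pi.py, this can then be launched on multiple cores, e.g.:
    #   mpiexec -n 8 python pi.py
    print(calc_pi(10_000_000))
```

The function body itself is unchanged; the decorator is the only modification needed to go from sequential to parallel execution.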
*Figure 2. Executing pi.py using mpiexec using 8 cores*
As we can see in Figure 3, all 8 cores are now busy at close to 100%. I tested this code on an AWS EC2 c5n.4xlarge instance with 8 physical cores running 2 threads per core, making 16 vCPUs. I also repeated this test on a MacBook Pro with 6 physical cores running 2 threads per core (12 logical CPUs). Let’s see if Bodo improved the run time.
*Figure 3. CPU utilization in the Monte Carlo Pi calculation on an AWS EC2 c5n.4xlarge instance: NumPy (top) and Bodo with 8 cores, i.e., mpiexec -n 8 python pi.py (bottom)*
Figure 4 shows the run times on both the MacBook Pro and the AWS c5n.4xlarge instance. As shown, running with 2 cores cuts the run time roughly in half, 3 cores bring it down to about a third, and with 4 cores it takes about one-quarter of the original run time. My MacBook Pro was a bit slower because it was running other processes as well, whereas the EC2 instance had a fresh Linux image with nothing but Python and Bodo installed.
Note that Bodo is efficient because it runs directly on physical cores; it doesn’t rely on simultaneous multithreading (hyper-threading) for performance. Consequently, if you look at the MacBook Pro run times in Figure 4, the run times for 6, 7, and 8 cores are about the same, since that machine has only 6 physical cores.
*Figure 4. Run time comparison for the Monte Carlo calculation of Pi, using standard NumPy versus NumPy parallelized through Bodo*
Bodo is a breakthrough in the fields of data science and data engineering. Most developers in these fields use Python, but they struggle to scale their applications to process big data. Rewriting Python application code in Spark to make it scale adds a layer of difficulty that often slows down the data science development process. With Bodo, Python code no longer needs to be converted to Scala or PySpark; it can stay as is, or with just minor refactoring for type stability. My experience with this benchmark made me want to put Bodo to the test with some more advanced benchmarks, and I will share those in future blog posts.
About the Author: Ali Reza Farhidzadeh is an Enterprise Artificial Intelligence Architect at Wipro Limited with 12 years of experience in data science, machine learning, business intelligence, and numerical computations. He is also a former professor of probability and statistics at the University at Buffalo.