The top Python libraries of 2018, by category, are:
SciPy and NumPy:
NumPy is the foundational scientific-computing library in data science. Its precompiled numerical and mathematical routines, combined with its optimized data structures, make it ideal for computations on large matrices and data arrays. SciPy builds on NumPy and extends it to tasks such as Fourier analysis and transforms, regression, and minimization. Because SciPy depends on NumPy, NumPy must be installed first.
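A minimal sketch of both ideas above: a vectorized matrix computation and a Fourier transform. NumPy's own `np.fft` module is used here for simplicity; SciPy's `scipy.fft` offers the same kind of interface with more routines.

```python
import numpy as np

# Vectorized matrix computation: precompiled routines replace Python loops.
a = np.arange(6).reshape(2, 3)        # 2x3 matrix [[0, 1, 2], [3, 4, 5]]
b = np.ones((3, 2))                   # 3x2 matrix of ones
product = a @ b                       # matrix multiplication -> 2x2 result

# A Fourier transform of a pure sine wave shows one dominant frequency bin.
t = np.linspace(0, 1, 128, endpoint=False)
signal = np.sin(2 * np.pi * 8 * t)    # 8 Hz sine, sampled at 128 Hz
spectrum = np.abs(np.fft.rfft(signal))
peak_hz = int(np.argmax(spectrum))    # index of the strongest bin -> 8

print(product)   # [[ 3.  3.] [12. 12.]]
print(peak_hz)   # 8
```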
Caffe:
The ML library Caffe is popular for applications such as computer vision and the visualization of data. Switching between GPU and CPU is uncomplicated. Its processing speed and its configuration-driven model definitions, which keep models separate from code, make it a leader in speech, vision, and multimedia applications, as well as in research projects and academic teaching.
Matplotlib:
This 2D plotting library, written in Python, can generate scatterplots, histograms, bar charts, power spectra, error charts, and more in a few lines of code, in a variety of formats, across multiple platforms and interactive environments. Matplotlib is very popular in data science because it works from Python scripts, the IPython shell, web application servers, Jupyter Notebook, and a range of GUI toolkits.
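A quick illustration of the "few lines of code" claim: a histogram rendered straight to a PNG file. The non-interactive Agg backend is chosen here so the sketch runs without a display; the filename `histogram.png` is arbitrary.

```python
import matplotlib
matplotlib.use("Agg")             # non-interactive backend: render to files only
import matplotlib.pyplot as plt
import numpy as np

# 1,000 normally distributed samples, binned into a histogram.
rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=1_000)

fig, ax = plt.subplots()
ax.hist(data, bins=30, edgecolor="black")
ax.set_xlabel("value")
ax.set_ylabel("count")
ax.set_title("1,000 samples from N(0, 1)")
fig.savefig("histogram.png")      # any supported format: png, pdf, svg, ...
```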
Keras:
Keras, written in Python, is used to build interfaces for neural networks. The Keras API is designed for humans and puts user experience first. It is supported at the backend by CNTK, TensorFlow, or Theano. It is useful for advanced and research applications because its stand-alone components, such as neural layers, optimizers, initialization schemes, cost functions, regularizers, and activation functions, can be combined freely into new expressions and models.
Pandas:
The open-source Pandas library can reshape data structures and automatically aligns labelled tabular and series data. It can find and fill missing data, read and write many data formats, and supports labelled indexing of heterogeneous data. It is compatible with NumPy and is used in fields as varied as statistics, engineering, the social sciences, and finance.
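The two headline features, automatic label alignment and missing-data handling, can be sketched in a few lines. The fruit names and prices below are made up purely for illustration.

```python
import pandas as pd

# Two labelled Series align on their index automatically; unmatched labels
# would become NaN, so fill_value=0 treats a missing label as zero instead.
q1 = pd.Series({"apples": 10, "pears": 4})
q2 = pd.Series({"apples": 7, "plums": 2})
total = q1.add(q2, fill_value=0)            # apples 17, pears 4, plums 2

# Find and fill missing data in a DataFrame column.
df = pd.DataFrame({"price": [1.5, None, 2.0]}, index=["a", "b", "c"])
missing = int(df["price"].isna().sum())     # one missing value
filled = df["price"].fillna(df["price"].mean())

print(total)
print(filled.tolist())   # [1.5, 1.75, 2.0]
```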
Scikit-learn:
The Scikit-learn module integrates ML algorithms for both supervised and unsupervised medium-scale problems. Its consistent API, performance, and documentation reflect an emphasis on bringing ML to non-specialists through a simple, ready-to-use high-level interface. Its uniform interface to a library of ML algorithms makes it easy to adopt in production, commercial, and academic settings.
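A minimal sketch of that uniform interface: every scikit-learn estimator exposes the same `fit`/`predict` methods, so swapping one algorithm for another changes only the constructor line. The toy dataset below (points above or below the line y = x) is invented for illustration.

```python
# scikit-learn's estimator API: identical fit/predict calls for any model.
from sklearn.neighbors import KNeighborsClassifier

# Toy supervised problem: label 1 if the point lies above y = x, else 0.
X = [[0, 1], [1, 2], [2, 3], [1, 0], [2, 1], [3, 2]]
y = [1, 1, 1, 0, 0, 0]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X, y)                              # same call for any estimator
predictions = model.predict([[0, 3], [3, 0]])
print(predictions.tolist())                  # [1, 0]
```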
Theano:
Theano is used in data science to define, optimize, and evaluate mathematical expressions over multi-dimensional arrays, with symbolic differentiation built in. It has an initially steep learning curve and differs from libraries such as NumPy, which delegate to hand-written Fortran and C routines, in that it generates and compiles its own optimized code. Theano can also run on GPUs, increasing speed and performance through parallel processing.
TensorFlow:
TensorFlow has a flexible architecture with a core written in C++ and bindings for other languages, and it can be deployed on GPUs and CPUs for deep learning with neural networks. As Google's second-generation ML system, the successor to DistBelief, it offers excellent speed, performance, and flexibility.
PyBrain:
PyBrain is stacked with neural-network algorithms that can handle high dimensionality and continuous state spaces. Its flexible algorithms are popular in research, and because they live in the library's kernel, they can be adapted, together with deep neural networks and reinforcement learning, to real-life tasks.
Shogun:
Shogun, like the other libraries listed, offers a strong feature set: semi-supervised, multi-task, and large-scale learning; visualization and test frameworks; multi-class classification, one-class classification, regression, pre-processing, structured-output learning, and built-in model-selection strategies. It can be deployed on most operating systems, is written in C++, uses multiple-kernel learning and testing, and even supports bindings to other ML libraries.
This list of popular Python libraries is not exhaustive, is in no particular order, and will keep evolving. These are tools best placed in the hands of a capable developer, proficient in statistical analysis and its tools and techniques, working alongside a data scientist who can integrate them with the surrounding environment and application software.