Introduction to Python


1 Introduction to Python: Part 3, Advanced Topics
Michael Kraus, Max-Planck-Institut für Plasmaphysik, Garching
7 March 2012

2 Some Advanced Topics

- calling and embedding C and Fortran code:
  - weave: inline C/C++ code, translation of Python code to C++
  - SWIG: wrap C/C++ code
  - f2py: wrap Fortran code
  - ctypes, cython: call functions in C libraries
- GUI programming with PyQt and PySide
- parallelisation:
  - threading
  - multiprocessing
  - parallel python (PP)
  - mpi4py
- symbolic computing with Sage

3 Calling and Embedding C and Fortran Code in Python

- Python is very fast for writing code, but not necessarily so fast for executing it (especially in numerical applications)
  - implement time-consuming parts of your program in C/C++/Fortran
- lots of existing library routines in C/C++/Fortran
  - reuse them in Python by wrapping the corresponding libraries or source files, making them appear as Python modules
- couple or extend existing codes with Python
  - combine different modules or codes into larger programs
  - use Python as (graphical) user interface, for visualisation or debugging

4 Sample Problem: Laplace Equation

- solving the 2D Laplace equation using an iterative finite difference scheme (four point averaging, Gauss-Seidel or Gauss-Jordan)
- solve for some unknown function u(x, y) such that ∇²u = 0 with some boundary condition specified
- discretise the domain into an (nx × ny) grid of points
- the function u can be represented as a two-dimensional array u(nx, ny)
- the values of u along the sides of the domain are given (and stay fixed)
- the solution can be obtained by iterating in the following manner:

    for i in range(1, nx-1):
        for j in range(1, ny-1):
            u[i,j] = ((u[i-1, j] + u[i+1, j])*dy**2 +
                      (u[i, j-1] + u[i, j+1])*dx**2) / (2.0*(dx**2 + dy**2))

- in pure Python this is REALLY slow

5 Sample Problem: Laplace Equation in NumPy

- the for loop of the Laplace solver can be readily expressed by a much simpler NumPy expression:

    u[1:-1, 1:-1] = ((u[0:-2, 1:-1] + u[2:, 1:-1])*dy**2 +
                     (u[1:-1, 0:-2] + u[1:-1, 2:])*dx**2) / (2.0*(dx**2 + dy**2))

- the advantage of this expression is that it is evaluated completely in C
  - speedup of a factor of 50x over the pure Python loop (another factor of 5 or so if you link NumPy with Intel MKL or ATLAS)
- (slight) drawback: this expression uses temporary arrays
  - during one iteration, the values computed earlier in the same sweep are not reused
  - in the original for loop, once the value of u[1,1] is computed, the next value for u[1,2] uses the newly computed u[1,1], not the old one
  - since the NumPy expression uses temporary arrays internally, only the old value of u[1,1] is used
  - the algorithm still converges, but needs roughly twice as many iterations, which reduces the benefit by a factor of 2
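To see this behaviour end to end, here is a minimal, runnable sketch of the vectorized update (the grid size, spacing and boundary values are arbitrary choices for illustration):

```python
import numpy as np

def laplace_step(u, dx, dy):
    # one sweep of the vectorized update; the right-hand side is evaluated
    # into temporary arrays before u is overwritten (Jacobi-like behaviour)
    u[1:-1, 1:-1] = ((u[0:-2, 1:-1] + u[2:, 1:-1]) * dy**2 +
                     (u[1:-1, 0:-2] + u[1:-1, 2:]) * dx**2) \
                    / (2.0 * (dx**2 + dy**2))

nx = ny = 50
dx = dy = 0.1

# boundary condition: u = 1 along one edge, 0 elsewhere (stays fixed)
u = np.zeros((nx, ny))
u[0, :] = 1.0

for _ in range(200):
    laplace_step(u, dx, dy)

# the boundary stays fixed while the interior relaxes towards the solution
print(u[0, 0], u[1, ny // 2] > 0.0)
```

Note that only the interior slice u[1:-1, 1:-1] is ever assigned, so the boundary rows and columns keep their prescribed values throughout the iteration.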

6 Weave

- Weave is a subpackage of SciPy and has two modes of operation:
  - weave.blitz accelerates Python code by translating it to C++ code, which it compiles into a Python module
  - weave.inline allows embedding C/C++ code directly into Python code
- mainly used to speed up calculations on arrays
  - fast/efficient: operates directly on NumPy arrays (no temporary copies)
- the first time you run a blitz or inline function, it gets compiled into a Python module; the next time it is called, it runs immediately

References:

7 Weave: Laplace Equation in weave.blitz

- to use weave.blitz, the accelerated code has to be put into a string which is passed to the weave.blitz function:

    from scipy import weave

    expr = """u[1:-1, 1:-1] = ((u[0:-2, 1:-1] + u[2:, 1:-1])*dy**2 +
                               (u[1:-1, 0:-2] + u[1:-1, 2:])*dx**2) / (2.0*(dx**2 + dy**2))"""
    weave.blitz(expr, check_size=0)

- the first time the code is called, weave.blitz converts the NumPy expression into C++ code, builds a Python module, and invokes it
- for the array expressions, weave.blitz uses Blitz++
- speedup of x over the Python loop
- weave.blitz does not use temporary arrays for the computation (the computed values are reused immediately) and therefore behaves more like the original for loop

8 Weave: Laplace Equation in weave.inline

- in weave.inline the C/C++ code has to be put into a string which is passed to the weave.inline function, together with the variables used:

    from scipy.weave import converters, inline

    code = """
    for (int i=1; i<nx-1; ++i) {
        for (int j=1; j<ny-1; ++j) {
            u(i,j) = ( (u(i-1,j) + u(i+1,j))*dy*dy +
                       (u(i,j-1) + u(i,j+1))*dx*dx ) / (2.0*(dx*dx + dy*dy));
        }
    }
    """
    inline(code, ['u', 'dx', 'dy', 'nx', 'ny'],
           type_converters=converters.blitz, compiler='gcc')

- here we use Blitz++ arrays (speedup of x over the Python loop)
- with pointer arithmetic, you can get an additional speedup of a factor of 2
- weave.inline does not use temporary arrays for the computation

9 SWIG

- connects programs written in C/C++ with a variety of high-level programming languages (e.g. Python)
- parses C/C++ interfaces: takes the declarations found in header files and uses them to generate the wrapper code required for Python to call into the C/C++ code
- control of C/C++ applications (GUI, visualisation)
- testing and debugging
- glue together different C/C++ modules or codes
- reuse existing functions or libraries in Python (numerical methods, data analysis)

References:

10 SWIG example: square function in C

swig_example.h:

    #ifndef _SWIG_EXAMPLE_H_
    #define _SWIG_EXAMPLE_H_

    int square(int);

    #endif // _SWIG_EXAMPLE_H_

swig_example.c:

    #include "swig_example.h"

    int square(int x) {
        return x*x;
    }

- in order to call the functions in swig_example.c from Python, you need to write an interface file which is the input to SWIG
- SWIG generates a wrapper that looks like a normal Python module

11 SWIG

SWIG interface file for the example, swig_example.i:

    %module swig_example
    %{
    #include "swig_example.h"
    %}

    %include "swig_example.h"

- contains C/C++ declarations and special SWIG directives
- %module defines a module name
- code between %{ and %} is copied verbatim to the resulting wrapper file
  - used to include header files and other declarations required to make the generated wrapper code compile
  - declarations inside the %{ ... %} block do not automatically appear in the generated wrapper code
- below the module block, the declarations of the C/C++ functions to be included in the Python module are listed (here via %include)

12 SWIG

- call SWIG to generate wrapper code (swig_example_wrap.c) and Python module (swig_example.py):

    > swig -python swig_example.i

- compile the wrapper code into a shared library:

    > gcc `python-config --cflags` -c swig_example_wrap.c swig_example.c
    > gcc `python-config --ldflags` -shared -o _swig_example.so swig_example_wrap.o swig_example.o

- important: the library name has to start with _
- import the module from Python and call its functions:

    >>> import swig_example as se
    >>> se.square(2)
    4

13 SWIG

- NumPy arrays need some more care
  - we need typemaps to convert between C arrays and NumPy arrays
  - they generate additional code that takes care of correctly translating between C and Python objects
  - NumPy provides all you need in a SWIG include file (numpy.i)

Laplace solver in C, laplace.h:

    #ifndef _LAPLACE_H_
    #define _LAPLACE_H_

    void laplace(double *u, int nx, int ny, double dx, double dy);

    #endif // _LAPLACE_H_

14 SWIG

Laplace solver in C, laplace.c:

    #include "laplace.h"

    void laplace(double *u, int nx, int ny, double dx, double dy)
    {
        double dx2   = dx*dx;
        double dy2   = dy*dy;
        double d2inv = 0.5 / (dx2 + dy2);

        for (int i=1; i<nx-1; i++) {
            for (int j=1; j<ny-1; j++) {
                *(u+i*nx+j) = ( ( *(u+(i-1)*nx+j) + *(u+(i+1)*nx+j) ) * dy2
                              + ( *(u+i*nx+(j-1)) + *(u+i*nx+(j+1)) ) * dx2 ) * d2inv;
            }
        }
    }

15 SWIG

SWIG interface file for the Laplace solver, laplace.i:

    %module laplace_swig
    %{
    #define SWIG_FILE_WITH_INIT
    #include "laplace.h"
    %}

    %include "numpy.i"

    %init %{
    import_array();
    %}

    %apply (double* INPLACE_ARRAY2, int DIM1, int DIM2) {(double *u, int nx, int ny)};

    %include "laplace.h"

- call SWIG to generate wrapper code (laplace_wrap.c, laplace.py):

    > swig -I$(NUMPY_SWIG_DIR) -python laplace.i

- you have to specify the directory containing numpy.i (NUMPY_SWIG_DIR)

16 SWIG

- compile the wrapper code into a shared library:

    > gcc `python-config --cflags` -I$(NUMPY_INCLUDE_DIR) -c laplace_wrap.c laplace.c
    > gcc `python-config --ldflags` -shared -o _laplace_swig.so laplace_wrap.o laplace.o

- you have to add the NumPy include directory (NUMPY_INCLUDE_DIR)
- import the module from Python and call its functions:

    >>> import laplace_swig
    >>> dx = dy = 0.1
    >>> u = np.zeros( (100,100) )
    >>> u[0] = 1.
    >>> laplace_swig.laplace(u, dx, dy)

(the python27/numpy/1.6.1 module on the cluster provides the environment variables NUMPY_INCLUDE_DIR and NUMPY_SWIG_DIR)

17 f2py

- f2py is a NumPy module that lets you easily call Fortran functions from Python
- the f2py command builds a shared library and creates Python wrapper code that makes the Fortran routine look like a native Python module:

    > f2py -c laplace.f90 -m laplace_fortran

- in Python you only have to import it like any other Python module:

    >>> import laplace_fortran
    >>> laplace_fortran.laplace(...)

- when passing arrays, f2py automatically takes care of the right layout, i.e. row-major in Python and C vs. column-major in Fortran

References:

18 f2py

- in the Fortran routine you have to include some additional directives telling f2py the intent of the parameters:

    subroutine laplace(u, n, m, dx, dy)
        real*8, dimension(1:n,1:m) :: u
        real*8  :: dx, dy
        integer :: n, m, i, j
        !f2py intent(in,out) :: u
        !f2py intent(in)     :: dx, dy
        !f2py intent(hide)   :: n, m
        ...
    end subroutine

- the dimensions of the array are passed implicitly by Python:

    >>> dx = dy = 0.1
    >>> u = np.zeros( (100,100) )
    >>> u[0] = 1.
    >>> laplace_fortran.laplace(u, dx, dy)

19 Sample Problem: Laplace Solver with f2py

laplace.f90:

    subroutine laplace(u, nx, ny, dx, dy)
        real*8, dimension(1:nx,1:ny) :: u
        real*8  :: dx, dy
        integer :: nx, ny, i, j
        !f2py intent(in,out) :: u
        !f2py intent(in)     :: dx, dy
        !f2py intent(hide)   :: nx, ny

        do i = 2, nx-1
            do j = 2, ny-1
                u(i,j) = ( (u(i-1,j) + u(i+1,j))*dy*dy   &
                         + (u(i,j-1) + u(i,j+1))*dx*dx ) &
                         / (2.0*(dx*dx + dy*dy))
            enddo
        enddo
    end subroutine

20 External Libraries: ctypes

- the ctypes module of the Python Standard Library provides C compatible data types and allows calling functions in shared libraries
- ctypes can be used to wrap these libraries in pure Python
- to use ctypes to access C code you need to know some details about the underlying C library (names, calling arguments, types, etc.), but you do not have to write C extension wrapper code or compile anything with a C compiler (as in Cython)
- simple example: libc.rand()

    >>> import ctypes
    >>> libc = ctypes.CDLL("/usr/lib/libc.so")
    >>> libc.rand()

- ctypes provides functionality to take care of correct datatype handling, automatic type casting, passing values by reference, pointers, etc.

Reference:
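The rand() call works without further declarations because it takes no arguments and returns an int, which is the ctypes default. For anything else you should declare the argument and result types explicitly. A small sketch using the C math library (looking the library up via ctypes.util is one portable way to find it; the resolved path differs per platform):

```python
import ctypes
import ctypes.util

# locate and load the C math library
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# declare the C signature: double sqrt(double)
# without this, ctypes would pass and interpret the values as int
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

result = libm.sqrt(9.0)
print(result)  # 3.0
```

Setting argtypes additionally makes ctypes check the arguments you pass, catching type errors on the Python side instead of corrupting the call.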

21 External Libraries: Cython

- Cython provides declarations for many functions from the standard C library, e.g. for the C math library:

cython_cmath.pyx:

    from libc.math cimport sin

    cdef double f(double x):
        return sin(x*x)

- calling C's sin() function is substantially faster than Python's math.sin() function, as there is no wrapping of arguments, etc.
- the math library is not linked by default: in addition to cimporting the declarations, you must configure your build system to link against the shared library m, e.g. in setup.py:

    ext_modules = [ Extension("cython_cmath", ["cython_cmath.pyx"],
                              libraries=["m"]) ]

22 External Libraries: Cython

- if you want to access C code for which Cython does not provide a ready-to-use declaration, you must declare it yourself:

    cdef extern from "math.h":
        double sin(double)

- this declares the sin() function in a way that makes it available to Cython code and instructs Cython to generate C code that includes the math.h header file
- the C compiler will see the original declaration in math.h at compile time, but Cython does not parse math.h and thus requires a separate definition
- you can declare and call into any C library as long as the module that Cython generates is properly linked against the shared or static library

23 Performance Python Benchmark Reloaded

2D Laplace Solver: 500x500 grid, 100 iterations

    Type of Solution      Time GNU (ms)    Time Intel (ms)
    NumPy
    Weave (Blitz)
    Weave (Inline)
    Cython
    Cython (fast)
    Cython (2 threads)
    Cython (4 threads)
    SWIG
    ctypes
    f2py
    Pure C
    Pure Fortran

(Benchmarked on an Intel Xeon 2.83 GHz)

24 GUI Programming

lots of options:
- Tkinter: Python's default GUI toolkit, included in the Standard Library
- wxPython: Python wrapper for wxWidgets
- PyGTK: Python wrapper for GTK
- PyQt and PySide: Python wrappers for Qt (not just a GUI library!)
- Traits and TraitsUI: development model that comes with automatically created user interfaces

25 PyQt and PySide

- PyQt and PySide are Python bindings for the Qt application framework
- run on all platforms supported by Qt (Linux, MacOSX, Windows)
- the interface of both modules is almost identical (PySide is slightly cleaner, see wiki/differences_between_pyside_and_pyqt)
- main differences: licence (PyQt: GPL and commercial; PySide: LGPL) and PySide is supported by Nokia (who develops Qt)
- generate Python code from Qt Designer
- add new GUI controls written in Python to Qt Designer

Documentation:

26 PyQt and PySide: Simple Example

- simple example: only shows a small window
- necessary imports: basic GUI widgets are located in the QtGui module

    from PySide import QtGui

- every PySide application must create an application object; the sys.argv parameter is a list of arguments from the command line:

    app = QtGui.QApplication(sys.argv)

- QtGui.QWidget is the base class of all user interface objects in PySide; the default constructor has no parent and creates a window:

    w = QtGui.QWidget()

- resize the window, move it around on the screen, and set a title:

    w.resize(250, 150)
    w.move(300, 300)
    w.setWindowTitle('Simple Example')

27 PyQt and PySide: Simple Example

- make the window visible:

    w.show()

- finally, enter the main loop of the application:

    sys.exit(app.exec_())

- the event handling starts from this point: the main loop receives events from the window system and dispatches them to the application widgets
- the main loop ends if we call the exit() method or the main widget is destroyed (e.g. by clicking the little x on top of the window)
- the sys.exit() method ensures a clean exit

28 PyQt and PySide: Simple Example

pyside_simple_example.py:

    import sys
    from PySide import QtGui

    def main():
        app = QtGui.QApplication(sys.argv)

        w = QtGui.QWidget()
        w.resize(250, 150)
        w.move(300, 300)
        w.setWindowTitle('Simple Example')
        w.show()

        sys.exit(app.exec_())

    if __name__ == '__main__':
        main()

- we can do a lot with this window: resize it, maximise it, minimise it

29 PyQt and PySide: Simple Example

- you could also set an application icon:

    w.setWindowIcon(QtGui.QIcon('my_app.png'))

- the QtGui.QIcon is initialised by providing it with a (path and) filename
- you can move and resize at once:

    w.setGeometry(300, 300, 250, 150)

- the first two parameters are the x and y positions of the window
- the latter two parameters are the width and height of the window

30 PyQt and PySide: Widgets

- this was a procedural example, but in PySide you're writing objects
- create a new class called Example that inherits from QtGui.QWidget:

    class Example(QtGui.QWidget):

- we must call two constructors: for the Example class and for the inherited class:

    def __init__(self):
        super(Example, self).__init__()
        self.initUI()

- the super() method returns the parent object of the Example class
- the constructor method is always called __init__() in Python
- the creation of the GUI is delegated to the initUI() method

31 PyQt and PySide: Widgets

pyside_object_example.py:

    class Example(QtGui.QWidget):

        def __init__(self):
            super(Example, self).__init__()
            self.initUI()

        def initUI(self):
            self.setGeometry(300, 300, 250, 150)
            self.setWindowTitle('PySide Object Example')
            self.setWindowIcon(QtGui.QIcon('my_app.png'))
            self.show()

    def main():
        app = QtGui.QApplication(sys.argv)
        ex = Example()
        sys.exit(app.exec_())

    if __name__ == '__main__':
        main()

32 PyQt and PySide: Widgets

our Example class inherits lots of methods from the QtGui.QWidget class:

    self.setGeometry(300, 300, 250, 150)
    self.setWindowTitle('PySide Object Example')
    self.setWindowIcon(QtGui.QIcon('my_app.png'))

33 PyQt and PySide: Buttons and Tooltips

add a button and some tooltips:

    QtGui.QToolTip.setFont(QtGui.QFont('SansSerif', 10))

    self.setToolTip('This is a <b>QWidget</b> widget')

    btn = QtGui.QPushButton('Button', self)
    btn.setToolTip('This is a <b>QPushButton</b> widget')
    btn.resize(btn.sizeHint())
    btn.move(50, 50)

34 PyQt and PySide: Buttons and Tooltips

- QtGui provides static methods to set default properties like fonts:

    QtGui.QToolTip.setFont(QtGui.QFont('SansSerif', 10))

- set a tooltip for our Example class:

    self.setToolTip('This is a <b>QWidget</b> widget')

- create a button which is placed within our Example class' main widget:

    btn = QtGui.QPushButton('Button', self)

- set a tooltip for the button, resize it and move it somewhere:

    btn.setToolTip('This is a <b>QPushButton</b> widget')
    btn.resize(btn.sizeHint())
    btn.move(50, 50)

- GUI elements can give a size hint corresponding to their content (e.g. button text, picture size)

35 PyQt and PySide: Signals and Slots

- bring the button to life by connecting it to a slot:

    btn = QtGui.QPushButton('Quit', self)
    btn.clicked.connect(QtCore.QCoreApplication.instance().quit)
    btn.resize(btn.sizeHint())
    btn.move(50, 50)

- now we can close our window programmatically (not only by clicking x)
- you have to import QtCore for this to work:

    from PySide import QtCore

- the event processing system in PySide uses the signal & slot mechanism
- if we click on the button, the signal clicked is emitted; it can be connected to any Qt slot or any Python function
- QtCore.QCoreApplication is created with the QtGui.QApplication; it contains the main event loop and processes and dispatches all events
- its instance() method returns the current instance
- the quit() method terminates the application

36 PyQt and PySide: QLineEdit and QMessageBox

- add a QtGui.QLineEdit where the user can enter some text that is displayed in a popup window when the OK button is clicked:

    def initUI(self):
        ...
        self.inputle = QtGui.QLineEdit(self)
        self.inputle.resize(120, 20)
        self.inputle.move(10, 50)

        okbtn = QtGui.QPushButton('OK', self)
        okbtn.clicked.connect(self.showMessage)
        okbtn.resize(okbtn.sizeHint())
        okbtn.move(150, 50)
        ...

- we also have to define a function that serves as slot:

    def showMessage(self):
        QtGui.QMessageBox.information(self, "Information",
                                      self.inputle.text())

37 PyQt and PySide: QLineEdit and QMessageBox

- the QtGui.QLineEdit has to be an element of the class so that the slot can access it:

    self.inputle = QtGui.QLineEdit(self)

- the OK button is connected to the method showMessage:

    okbtn.clicked.connect(self.showMessage)

- showMessage reads the content of the QtGui.QLineEdit via its text() method and creates a QtGui.QMessageBox:

    QtGui.QMessageBox.information(self, "Information",
                                  self.inputle.text())

- the title of the QtGui.QMessageBox is set to "Information"
- the message it displays is the content of the QtGui.QLineEdit

38 PyQt and PySide: QLineEdit and QMessageBox

- this is how our QLineEdit example looks
- the left window looks a little messy...
- ...let's add some intelligent layout management!

39 PyQt and PySide: Layout Management

- absolute positioning, i.e. specifying the position and size of each widget in pixels, is not very practical:
  - the size and the position of a widget do not change if you resize a window
  - changing fonts in your application might spoil the layout
  - if you decide to change your layout, you must completely redo it
- use layout classes instead: QtGui.QHBoxLayout, QtGui.QVBoxLayout, QtGui.QGridLayout
  - line up widgets horizontally, vertically or in a grid
  - add stretches that adapt when the window is resized
- example: place two buttons in the bottom right corner
  - use one horizontal box, one vertical box, and stretch factors

40 PyQt and PySide: Layout Management

- create two push buttons:

    okbutton = QtGui.QPushButton("OK")
    cancelbutton = QtGui.QPushButton("Cancel")

- create a horizontal box layout, add a stretch factor and the buttons:

    hbox = QtGui.QHBoxLayout()
    hbox.addStretch(1)
    hbox.addWidget(okbutton)
    hbox.addWidget(cancelbutton)

- put the horizontal layout into a vertical one:

    vbox = QtGui.QVBoxLayout()
    vbox.addStretch(1)
    vbox.addLayout(hbox)

- set the main layout of the window:

    self.setLayout(vbox)

41 PyQt and PySide: Layout Management

    def initUI(self):
        self.setGeometry(300, 300, 250, 150)
        self.setWindowTitle('PySide Layout Example')
        self.setWindowIcon(QtGui.QIcon('my_app.png'))

        okbutton = QtGui.QPushButton("OK")
        cancelbutton = QtGui.QPushButton("Cancel")

        hbox = QtGui.QHBoxLayout()
        hbox.addStretch(1)
        hbox.addWidget(okbutton)
        hbox.addWidget(cancelbutton)

        vbox = QtGui.QVBoxLayout()
        vbox.addStretch(1)
        vbox.addLayout(hbox)

        self.setLayout(vbox)
        self.show()

42 PyQt and PySide: Layout Management

43 PyQt and PySide: Layout Management

for the QLineEdit example we use a grid layout:

- create widgets:

    self.inputle = QtGui.QLineEdit(self)
    okbtn = QtGui.QPushButton('OK', self)
    qtbtn = QtGui.QPushButton('Quit', self)

- create the grid layout:

    grid = QtGui.QGridLayout()

- add widgets with addWidget(widget, row, column):

    grid.addWidget(self.inputle, 0, 1)
    grid.addWidget(okbtn, 0, 2)
    grid.addWidget(qtbtn, 2, 2)

- set the stretch factor of the second row to 1:

    grid.setRowStretch(1, 1)

44 PyQt and PySide: Layout Management

    def initUI(self):
        ...
        self.inputle = QtGui.QLineEdit(self)
        qtbtn = QtGui.QPushButton('Quit', self)
        okbtn = QtGui.QPushButton('OK', self)

        app = QtCore.QCoreApplication.instance()
        qtbtn.clicked.connect(app.quit)
        okbtn.clicked.connect(self.showMessage)

        grid = QtGui.QGridLayout()
        grid.addWidget(self.inputle, 0, 1)
        grid.addWidget(okbtn, 0, 2)
        grid.addWidget(qtbtn, 2, 2)
        grid.setRowStretch(1, 1)

        self.setLayout(grid)
        self.show()

45 PyQt and PySide: Layout Management

46 PyQt and PySide: Matplotlib

- we want to embed a Matplotlib Figure into a QtWidget
- the Figure object is the backend-independent representation of our plot:

    from matplotlib.figure import Figure

- the FigureCanvasQTAgg object is the backend-dependent figure canvas:

    from matplotlib.backends.backend_qt4agg \
        import FigureCanvasQTAgg as FigureCanvas

- it renders the Figure we're about to draw to the Qt4 backend
- the FigureCanvasQTAgg is a Matplotlib class as well as a QWidget, so we can create a new widget by deriving from it:

    class Qt4MplCanvas(FigureCanvas):
        ...

- this class will render our Matplotlib plot

47 PyQt and PySide: Matplotlib

- the __init__ method contains the code to draw the graph:

    def __init__(self):
        self.x = np.arange(0.0, 3.0, 0.01)
        self.y = np.cos(2*np.pi*self.x)

        self.fig = Figure()
        self.axes = self.fig.add_subplot(111)
        self.axes.plot(self.x, self.y)

- initialise the FigureCanvas (renders the Matplotlib Figure in a QtWidget):

        FigureCanvas.__init__(self, self.fig)

- the plot canvas is instantiated in the main() function:

    def main():
        app = QtGui.QApplication(sys.argv)
        mpl = Qt4MplCanvas()
        mpl.show()
        sys.exit(app.exec_())

48 PyQt and PySide: Matplotlib

    import sys
    import numpy as np

    from PySide import QtCore, QtGui

    from matplotlib.figure import Figure
    from matplotlib.backends.backend_qt4agg \
        import FigureCanvasQTAgg as FigureCanvas

    class Qt4MplCanvas(FigureCanvas):

        def __init__(self):
            self.x = np.arange(0.0, 3.0, 0.01)
            self.y = np.cos(2*np.pi*self.x)

            self.fig = Figure()
            self.axes = self.fig.add_subplot(111)
            self.axes.plot(self.x, self.y)

            FigureCanvas.__init__(self, self.fig)

    def main():
        app = QtGui.QApplication(sys.argv)
        mpl = Qt4MplCanvas()
        mpl.show()
        sys.exit(app.exec_())

    if __name__ == '__main__':
        main()

49 PyQt and PySide: Matplotlib

50 PyQt and PySide: Designer

- Qt tool for designing and building graphical user interfaces
- design widgets, dialogs or complete main windows using on-screen forms and a simple drag-and-drop interface
- Qt Designer uses XML .ui files to store designs and does not generate any code itself
- Qt's uic utility generates the C++ code that creates the user interface
- Qt's QUiLoader class allows an application to load a .ui file and create the corresponding user interface dynamically
- PySide and PyQt include the uic Python module:
  - like QUiLoader, it can load .ui files to create a user interface dynamically
  - like the uic utility, it can also generate the Python code that will create the user interface
  - the pyuic4 utility is a command line interface to the uic module

51 PyQt and PySide: Designer

52 PyQt and PySide: Designer

53 PyQt and PySide: Designer

54 PyQt and PySide: Designer

55 PyQt and PySide: Designer

56 PyQt and PySide: Designer

57 PyQt and PySide: Designer

- we just created a graphical user interface! (example.ui)
- generate Python code with pyuic4:

    > pyuic4 example.ui -o example.py

- the code is contained in a single class Ui_LeWidget that is derived from the Python object class
- the name of the class is the name of the top-level object set in Designer, with Ui_ prepended
- the class contains a method called setupUi(); it takes a single argument, which is the widget in which the user interface is created (typically a QWidget, QDialog or QMainWindow):

    window = QWidget()
    ui = Ui_LeWidget()
    ui.setupUi(window)

- the generated code can then be used in a number of ways

58 PyQt and PySide: Designer

create a simple application to create the dialog:

    import sys

    from PyQt4.QtGui import QApplication, QWidget
    from example import Ui_LeWidget

    app = QApplication(sys.argv)

    window = QWidget()
    ui = Ui_LeWidget()
    ui.setupUi(window)
    window.show()

    sys.exit(app.exec_())

59 PyQt and PySide: Designer

single inheritance approach: subclass QWidget and set up the user interface in the __init__() method:

    import sys

    from PyQt4.QtGui import QApplication, QWidget, QMessageBox
    from PyQt4 import QtCore

    from example import Ui_LeWidget

    class LeWidget(QWidget):

        def __init__(self):
            QWidget.__init__(self)

            self.ui = Ui_LeWidget()
            self.ui.setupUi(self)

            app = QtCore.QCoreApplication.instance()
            self.ui.qtbtn.clicked.connect(app.quit)
            self.ui.okbtn.clicked.connect(self.showMessage)

            self.show()

        def showMessage(self):
            text = self.ui.inputle.text()
            QMessageBox.information(self, 'Information', text)

60 PyQt and PySide: Designer

multiple inheritance approach: subclass QWidget and Ui_LeWidget:

    import sys

    from PyQt4.QtGui import QApplication, QWidget, QMessageBox
    from PyQt4 import QtCore

    from example import Ui_LeWidget

    class LeWidget(QWidget, Ui_LeWidget):

        def __init__(self):
            QWidget.__init__(self)
            self.setupUi(self)

            app = QtCore.QCoreApplication.instance()
            self.qtbtn.clicked.connect(app.quit)
            self.okbtn.clicked.connect(self.showMessage)

            self.show()

        def showMessage(self):
            text = self.inputle.text()
            QMessageBox.information(self, 'Information', text)

61 PyQt and PySide

- this just gives you a taste of how PySide and PyQt work
- there are quite a few other important basic topics:
  - widgets: combo box, check box, slider, progress bar, tables, ...
  - menus, toolbars, statusbar
  - signals & slots: emit signals yourself
  - event models: catch events and ask to ignore or accept them ("Do you really want to quit?")
  - dialogs: input, select file, print
  - drag and drop, using the clipboard
  - images, graphics, drawing
  - custom widgets
  - the model, view, controller paradigm
- the aforementioned tutorial is a good place to start:

62 Parallel Programming

- Python includes a multithreading and a multiprocessing package
- multithreading is seriously limited by the Global Interpreter Lock, which allows only one thread at a time to interact with the interpreter
  - this restricts Python programs to run on a single processor, regardless of how many CPU cores you have and how many threads you create
- multiprocessing allows spawning subprocesses which may run on different cores but are completely independent entities
  - communication is only possible by message passing, which makes parallelisation an effort that is probably not justified by the gain
- however, you can:
  - compile NumPy and SciPy with threaded libraries like ATLAS or MKL
  - use Cython's prange for very simple parallelisation of loops via OpenMP
  - use Parallel Python (job-based parallelisation)
  - use mpi4py (message passing, standard in HPC with C/C++/Fortran)

63 Parallel Python (PP)

- the PP module provides a mechanism for parallel execution of Python code on systems with multiple cores and on clusters connected via network
- simple-to-implement, job-based parallelisation technique
- internally PP uses processes and Inter Process Communication (IPC) to organise parallel computations
- all the details and complexity are hidden from you and your application: it just submits jobs and retrieves their results
- very simple way to write parallel Python applications
- cross-platform portability (Linux, MacOSX, Windows), interoperability, dynamic load-balancing
- software written with PP works in parallel also on many computers connected via local network or the internet, even if they run different operating systems (it's pure Python!)

Reference:

64 Parallel Python (PP)

- import the pp module:

    import pp

- start the pp execution server with the number of workers set to the number of processors in the system:

    job_server = pp.Server()

- submit all the tasks for parallel execution:

    f1 = job_server.submit(func1, args1, depfuncs1, modules1)
    f2 = job_server.submit(func1, args2, depfuncs1, modules1)
    f3 = job_server.submit(func2, args3, depfuncs2, modules2)
    ...

- retrieve the results as needed:

    r1 = f1()
    r2 = f2()
    r3 = f3()
    ...

65 Parallel Python (PP)

    import math
    import pp

    def sum_primes(nstart, nfinish):
        sum = 0
        for n in xrange(nstart, nfinish+1):
            if isprime(n):   # checks if n is a prime
                sum += n
        return sum

    nprimes = ...

    job_server = pp.Server()
    ncpus = job_server.get_ncpus()

    np_cpu, np_add = divmod(nprimes, ncpus)
    ranges = [ (i*np_cpu+1, (i+1)*np_cpu) for i in range(0, ncpus) ]
    ranges[ncpus-1] = (ranges[ncpus-1][0], ranges[ncpus-1][1]+np_add)

    sum = 0
    jobs = [ job_server.submit(sum_primes, input, (isprime,), ("math",))
             for input in ranges ]
    for job in jobs:
        sum += job()

66 Parallel Python (PP)

- the task object returned by a submit call has an attribute finished which indicates the status of the task and can be used to check if it has been completed:

    task = job_server.submit(f1, (a, b, c))
    ...
    if task.finished:
        print("The task is done!")
    else:
        print("Still working on it...")

- you can perform an action at the time of completion of each individual task by setting the callback argument of the submit method:

    sum = 0

    def add_sum(n):
        global sum
        sum += n
    ...
    task = job_server.submit(sum_primes, (nstart, nend),
                             callback=add_sum)

67 mpi4py

- MPI for Python provides full-featured bindings of the Message Passing Interface standard for the Python programming language
- point-to-point (sends, receives) and collective (broadcasts, scatters, gathers) communication of any picklable Python object (via the pickle module) as well as of buffer-providing objects (e.g. NumPy arrays)
  - (pickling: conversion of a Python object hierarchy into a byte stream)
- provides an object oriented interface which closely follows the MPI-2 C++ bindings and works with most MPI implementations
- any user of the standard C/C++ MPI bindings should be able to use this module without needing to learn a new interface

Reference:
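The pickling step that these object-based sends and receives perform under the hood can be illustrated with the pickle module alone; a small sketch:

```python
import pickle

# an arbitrary Python object hierarchy
data = {'a': 7, 'b': 3.14, 'nested': [1, 2, (3, 4)]}

# pickling: object hierarchy -> byte stream
# (this is what happens to a message before it is handed to MPI)
buf = pickle.dumps(data)
print(type(buf))

# unpickling: byte stream -> equivalent object hierarchy (receiving side)
restored = pickle.loads(buf)
print(restored == data)  # True
```

This is also why arbitrary dictionaries and lists can be sent: anything pickle can serialise, the object-based communication calls can transmit.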

68 mpi4py

- allows any Python program to exploit multiple processors
- allows wrapping of C/C++ and Fortran code that uses MPI with Cython, SWIG and f2py
  - you can run almost any MPI based C/C++/Fortran code from Python
- integration with IPython enables MPI applications to be used interactively
- Cython has bindings for mpi4py (mpi4py itself is written in Cython):

    from mpi4py import MPI
    from mpi4py cimport MPI

69 mpi4py: Hello World

simple example: helloworld.py

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    print("Hello, World! I am process %d of %d." % (rank, size))

execute with mpiexec:

    > openmpiexec -n 4 python helloworld.py
    Hello, World! I am process 2 of 4.
    Hello, World! I am process 3 of 4.
    Hello, World! I am process 0 of 4.
    Hello, World! I am process 1 of 4.

70 mpi4py: Point-to-Point Communication of Python Objects

point-to-point: transmission of data between a pair of processes
send() and recv() communicate typed data (Python objects): ptp_pyobj.py

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        data = {'a': 7, 'b': 3.14}
        comm.send(data, dest=1, tag=11)
        print("I am process %d. Sent data:" % (rank))
        print(data)
    elif rank == 1:
        data = comm.recv(source=0, tag=11)
        print("I am process %d. Received data:" % (rank))
        print(data)
    else:
        print("I am process %d. I am bored." % (rank))

71 mpi4py: Point-to-Point Communication of Python Objects

the type information enables the conversion of data representation from one architecture to another
allows for non-contiguous data layouts and user-defined datatypes
the tag information allows selectivity of messages at the receiving end

output:

    > openmpiexec -n 4 python ptp_pyobj.py
    I am process 0 of 4. Sent data: {'a': 7, 'b': 3.14}
    I am process 1 of 4. Received data: {'a': 7, 'b': 3.14}
    I am process 2 of 4. I am bored.
    I am process 3 of 4. I am bored.

72 mpi4py: Point-to-Point Communication of NumPy Arrays

Send() and Recv() communicate memory buffers (e.g. NumPy arrays)
pass explicit MPI datatypes:

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        data = np.arange(100, dtype=np.float64)
        comm.Send([data, MPI.DOUBLE], dest=1, tag=77)
    elif rank == 1:
        data = np.arange(100, dtype=np.float64)   # receive buffer, contents are overwritten
        comm.Recv([data, MPI.DOUBLE], source=0, tag=77)

73 mpi4py: Point-to-Point Communication of NumPy Arrays

Send() and Recv() communicate memory buffers (e.g. NumPy arrays)
automatic MPI datatype discovery:

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        data = np.arange(100, dtype=np.float64)
        comm.Send(data, dest=1, tag=77)
    elif rank == 1:
        data = np.empty(100, dtype=np.float64)
        comm.Recv(data, source=0, tag=77)

74 mpi4py: Blocking and Nonblocking Communications

send() and recv() as well as Send() and Recv() are blocking functions: they block the caller until the data buffers involved in the communication can be safely reused by the application program
often, performance can be significantly increased by overlapping communication and computation
nonblocking send and receive functions always come in two parts:
posting functions: begin the requested operation
test-for-completion functions: discover whether the requested operation has completed
Isend() and Irecv() initiate a send and a receive operation, respectively, and return a Request instance, uniquely identifying the started operation
completion can be managed using the Test(), Wait() and Cancel() methods of the Request class
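mpi4py is not needed to see the post/test-for-completion pattern itself; the standard library's concurrent.futures exposes the same idea, so a serial sketch can stand in for Isend()/Irecv() and Request.Test()/Wait() (heavy_computation is an illustrative name):

```python
from concurrent.futures import ThreadPoolExecutor

def heavy_computation(x):
    # stand-in for work that overlaps with "communication"
    return x * x

with ThreadPoolExecutor(max_workers=2) as pool:
    # "posting": submit() returns immediately with a Future,
    # much like Isend()/Irecv() return a Request
    future = pool.submit(heavy_computation, 21)

    # ... the caller is free to do other work here ...

    # non-blocking status check, like Request.Test()
    still_running = not future.done()

    # blocking wait for the outcome, like Request.Wait()
    print(future.result())  # 441
```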

75 mpi4py: Collective Communications

simultaneous transmission of data between multiple processes
commonly used collective communication operations:
global communication functions:
broadcast data from one node to all nodes
scatter data from one node to all nodes
gather data from all nodes to one node
barrier synchronisation across all nodes
global reduction operations such as sum, maximum, minimum, etc.
collective functions communicate typed data or memory buffers:
bcast(), scatter(), gather(), allgather() and alltoall() communicate generic Python objects
Bcast(), Scatter(), Gather(), Allgather() and Alltoall() communicate memory buffers
collective functions come in blocking versions only
collective messages are not paired with an associated tag: selectivity of messages is implied in the calling order
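A rough sketch of what these collectives mean, using plain Python lists in place of processes (bcast, scatter and gather here are toy stand-ins, not the mpi4py API):

```python
def bcast(data, n):
    # every "rank" ends up with the root's object
    return [data for _ in range(n)]

def scatter(data, n):
    # rank i receives data[i]; len(data) must equal the number of ranks
    assert len(data) == n
    return list(data)

def gather(per_rank_values):
    # the root ends up with one entry per rank
    return list(per_rank_values)

size = 4
sent = [(i + 1) ** 2 for i in range(size)]
received = scatter(sent, size)   # rank i would see (i+1)**2
collected = gather(received)     # the root recovers the original list
print(collected)                 # [1, 4, 9, 16]
```

Gather is the inverse of scatter, and allgather is simply a gather whose result is then made available on every rank, as in bcast.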

76 mpi4py: Broadcasting

broadcast a Python list from one process to all processes:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    size = comm.Get_size()   # world size, used to build the list
    rank = comm.Get_rank()

    if rank == 0:
        data = [(i+1)**2 for i in range(size)]
    else:
        data = None

    data = comm.bcast(data, root=0)

each process receives the complete list

77 mpi4py: Scattering

scatter a Python list from one process to all processes:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    size = comm.Get_size()
    rank = comm.Get_rank()

    if rank == 0:
        data = [(i+1)**2 for i in range(size)]
    else:
        data = None

    data = comm.scatter(data, root=0)
    assert data == (rank+1)**2

each process receives one list element: process rank gets data[rank] of the root's list

78 mpi4py: Gathering

gather together values from all processes to one process:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    size = comm.Get_size()
    rank = comm.Get_rank()

    data = (rank+1)**2
    data = comm.gather(data, root=0)

    if rank == 0:
        for i in range(size):
            assert data[i] == (i+1)**2
    else:
        assert data is None

the root process gathers a list whose entries are the data values of all processes
allgather() gathers data from all processes and distributes it to all other processes

79 mpi4py: Cython

mpi4py works in Cython just the same way it does in Python: helloworld_cython.pyx

    def hello(comm):
        rank = comm.Get_rank()
        size = comm.Get_size()
        print("Hello, World! I am process %d of %d." % (rank, size))

call from Python: hello_cython.py

    from mpi4py import MPI
    from helloworld_cython import hello

    comm = MPI.COMM_WORLD
    hello(comm)

execute with mpiexec:

    > openmpiexec -n 4 python hello_cython.py
    Hello, World! I am process 0 of 4.
    Hello, World! I am process 1 of 4.
    Hello, World! I am process 2 of 4.
    Hello, World! I am process 3 of 4.

80 mpi4py: Cython

do some array calculations in Cython: calc_cython.py

    import numpy as np
    from mpi4py import MPI
    from calc import square

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    if rank == 0:
        data = np.arange(25*size)
        data = np.split(data, size)
    else:
        data = None

    data = comm.scatter(data, root=0)
    squ = square(data)
    squ = comm.gather(squ, root=0)

    if rank == 0:
        squ = np.concatenate(squ)
        print(squ)

81 mpi4py: Cython

within Cython you can of course use the fast cdef functions: calc.pyx

    cimport numpy as np

    cdef fast_square(np.ndarray[long, ndim=1] x):
        return x*x

    def square(np.ndarray[long, ndim=1] x):
        return fast_square(x)

execute with mpiexec:

    > openmpiexec -n 4 python calc_cython.py
    [ ]
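Leaving MPI and Cython aside, the scatter → square → gather pipeline of this example is just a distributed way of squaring every element, which can be checked serially in plain Python (the world size of 4 is an arbitrary choice):

```python
size = 4                               # pretend MPI world size
data = list(range(25 * size))          # what rank 0 would create

# scatter: rank i gets the i-th contiguous chunk of 25 elements
chunks = [data[i * 25:(i + 1) * 25] for i in range(size)]

# each rank squares its own chunk (the job of the cdef function)
squared_chunks = [[x * x for x in chunk] for chunk in chunks]

# gather + concatenate on rank 0
result = [x for chunk in squared_chunks for x in chunk]

assert result == [x * x for x in data]
print(result[:5])  # [0, 1, 4, 9, 16]
```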

82 mpi4py: Wrapping with SWIG

MPI hello world in C: hello.h

    #ifndef _HELLO_H_
    #define _HELLO_H_
    #include <mpi.h>
    void sayhello(MPI_Comm);
    #endif // _HELLO_H_

hello.c

    #include <stdio.h>
    #include "hello.h"

    void sayhello(MPI_Comm comm)
    {
        int size, rank;
        MPI_Comm_size(comm, &size);
        MPI_Comm_rank(comm, &rank);
        printf("Hello, World! I am process %d of %d.\n", rank, size);
    }

83 mpi4py: Wrapping with SWIG

SWIG interface: hello.i

    %module helloworld
    %{
    #include "hello.h"
    %}

    %include mpi4py/mpi4py.i
    %mpi4py_typemap(Comm, MPI_Comm);
    %include <hello.h>

create wrapper and compile library:

    > swig -I$(MPI4PY_SWIG_DIR) -python hello.i
    > openmpicc `python-config --cflags` -I$(MPI4PY_INCLUDE_DIR) \
          -c hello_wrap.c hello.c
    > openmpicc `python-config --ldflags` -shared -o _helloworld.so \
          hello_wrap.o hello.o

84 mpi4py: Wrapping with SWIG

call from Python: hello_swig.py

    from mpi4py import MPI
    import helloworld

    helloworld.sayhello(MPI.COMM_WORLD)

the SWIG typemap takes care of translating the mpi4py communicator into a C object, so we can pass it directly

execute with mpiexec:

    > openmpiexec -n 4 python hello_swig.py
    Hello, World! I am process 0 of 4.
    Hello, World! I am process 2 of 4.
    Hello, World! I am process 3 of 4.
    Hello, World! I am process 1 of 4.

85 mpi4py: Wrapping with f2py

MPI hello world in Fortran: helloworld.f90

    subroutine sayhello(comm)
      use mpi
      implicit none
      integer :: comm, rank, size, ierr
      call MPI_Comm_size(comm, size, ierr)
      call MPI_Comm_rank(comm, rank, ierr)
      print *, 'Hello, World! I am process ', rank, ' of ', size, '.'
    end subroutine sayhello

compile with f2py (almost) as usual:

    > f2py --fcompiler=gnu95 --f90exec=openmpif90 \
          -m helloworld -c helloworld.f90

you only have to specify the MPI Fortran compiler with --f90exec

86 mpi4py: Wrapping with f2py

call from Python: hello_f2py.py

    from mpi4py import MPI
    import helloworld

    comm = MPI.COMM_WORLD
    helloworld.sayhello(comm.py2f())

the py2f() function provides a Fortran communicator

execute with mpiexec:

    > openmpiexec -n 4 python hello_f2py.py
    Hello, World! I am process 2 of 4.
    Hello, World! I am process 0 of 4.
    Hello, World! I am process 3 of 4.
    Hello, World! I am process 1 of 4.

87 Symbolic Computing with Sage

Sage is an open-source mathematics software system based on Python
combines nearly 100 packages under a unified interface
includes a huge range of mathematics: basic algebra, calculus, elementary to very advanced number theory, cryptography, numerical computation, commutative algebra, group theory, combinatorics, graph theory, exact linear algebra and much more
the user interface is a notebook in a web browser or the command line
it's a viable, free alternative to Maple, Mathematica, and MATLAB

References:
Sage: Beginner's Guide (2011), Craig Finch



More information

MPI: Parallel Programming for Extreme Machines. Si Hammond, High Performance Systems Group

MPI: Parallel Programming for Extreme Machines. Si Hammond, High Performance Systems Group MPI: Parallel Programming for Extreme Machines Si Hammond, High Performance Systems Group Quick Introduction Si Hammond, (sdh@dcs.warwick.ac.uk) WPRF/PhD Research student, High Performance Systems Group,

More information

Chip Multiprocessors COMP Lecture 9 - OpenMP & MPI

Chip Multiprocessors COMP Lecture 9 - OpenMP & MPI Chip Multiprocessors COMP35112 Lecture 9 - OpenMP & MPI Graham Riley 14 February 2018 1 Today s Lecture Dividing work to be done in parallel between threads in Java (as you are doing in the labs) is rather

More information

Mixed language programming

Mixed language programming Mixed language programming Simon Funke 1,2 Ola Skavhaug 3 Joakim Sundnes 1,2 Hans Petter Langtangen 1,2 Center for Biomedical Computing, Simula Research Laboratory 1 Dept. of Informatics, University of

More information

MPI Collective communication

MPI Collective communication MPI Collective communication CPS343 Parallel and High Performance Computing Spring 2018 CPS343 (Parallel and HPC) MPI Collective communication Spring 2018 1 / 43 Outline 1 MPI Collective communication

More information

ECE 574 Cluster Computing Lecture 13

ECE 574 Cluster Computing Lecture 13 ECE 574 Cluster Computing Lecture 13 Vince Weaver http://www.eece.maine.edu/~vweaver vincent.weaver@maine.edu 15 October 2015 Announcements Homework #3 and #4 Grades out soon Homework #5 will be posted

More information

High Performance Computing Prof. Matthew Jacob Department of Computer Science and Automation Indian Institute of Science, Bangalore

High Performance Computing Prof. Matthew Jacob Department of Computer Science and Automation Indian Institute of Science, Bangalore High Performance Computing Prof. Matthew Jacob Department of Computer Science and Automation Indian Institute of Science, Bangalore Module No # 09 Lecture No # 40 This is lecture forty of the course on

More information

Day 15: Science Code in Python

Day 15: Science Code in Python Day 15: Science Code in Python 1 Turn In Homework 2 Homework Review 3 Science Code in Python? 4 Custom Code vs. Off-the-Shelf Trade-offs Costs (your time vs. your $$$) Your time (coding vs. learning) Control

More information

Topic Notes: Message Passing Interface (MPI)

Topic Notes: Message Passing Interface (MPI) Computer Science 400 Parallel Processing Siena College Fall 2008 Topic Notes: Message Passing Interface (MPI) The Message Passing Interface (MPI) was created by a standards committee in the early 1990

More information

Handling Parallelisation in OpenFOAM

Handling Parallelisation in OpenFOAM Handling Parallelisation in OpenFOAM Hrvoje Jasak hrvoje.jasak@fsb.hr Faculty of Mechanical Engineering and Naval Architecture University of Zagreb, Croatia Handling Parallelisation in OpenFOAM p. 1 Parallelisation

More information

HPC Workshop University of Kentucky May 9, 2007 May 10, 2007

HPC Workshop University of Kentucky May 9, 2007 May 10, 2007 HPC Workshop University of Kentucky May 9, 2007 May 10, 2007 Part 3 Parallel Programming Parallel Programming Concepts Amdahl s Law Parallel Programming Models Tools Compiler (Intel) Math Libraries (Intel)

More information

SERIOUS ABOUT SOFTWARE. Qt Core features. Timo Strömmer, May 26,

SERIOUS ABOUT SOFTWARE. Qt Core features. Timo Strömmer, May 26, SERIOUS ABOUT SOFTWARE Qt Core features Timo Strömmer, May 26, 2010 1 Contents C++ refresher Core features Object model Signals & slots Event loop Shared data Strings Containers Private implementation

More information

The Message Passing Interface (MPI) TMA4280 Introduction to Supercomputing

The Message Passing Interface (MPI) TMA4280 Introduction to Supercomputing The Message Passing Interface (MPI) TMA4280 Introduction to Supercomputing NTNU, IMF January 16. 2017 1 Parallelism Decompose the execution into several tasks according to the work to be done: Function/Task

More information

High Performance Computing Course Notes Message Passing Programming I

High Performance Computing Course Notes Message Passing Programming I High Performance Computing Course Notes 2008-2009 2009 Message Passing Programming I Message Passing Programming Message Passing is the most widely used parallel programming model Message passing works

More information

CS Programming Languages: Python

CS Programming Languages: Python CS 3101-1 - Programming Languages: Python Lecture 5: Exceptions / Daniel Bauer (bauer@cs.columbia.edu) October 08 2014 Daniel Bauer CS3101-1 Python - 05 - Exceptions / 1/35 Contents Exceptions Daniel Bauer

More information

Introduction to MPI. Ekpe Okorafor. School of Parallel Programming & Parallel Architecture for HPC ICTP October, 2014

Introduction to MPI. Ekpe Okorafor. School of Parallel Programming & Parallel Architecture for HPC ICTP October, 2014 Introduction to MPI Ekpe Okorafor School of Parallel Programming & Parallel Architecture for HPC ICTP October, 2014 Topics Introduction MPI Model and Basic Calls MPI Communication Summary 2 Topics Introduction

More information

multiprocessing and mpi4py

multiprocessing and mpi4py multiprocessing and mpi4py 02-03 May 2012 ARPA PIEMONTE m.cestari@cineca.it Bibliography multiprocessing http://docs.python.org/library/multiprocessing.html http://www.doughellmann.com/pymotw/multiprocessi

More information

Practical Introduction to Message-Passing Interface (MPI)

Practical Introduction to Message-Passing Interface (MPI) 1 Practical Introduction to Message-Passing Interface (MPI) October 1st, 2015 By: Pier-Luc St-Onge Partners and Sponsors 2 Setup for the workshop 1. Get a user ID and password paper (provided in class):

More information

Programming with MPI

Programming with MPI Programming with MPI p. 1/?? Programming with MPI Composite Types and Language Standards Nick Maclaren Computing Service nmm1@cam.ac.uk, ext. 34761 March 2008 Programming with MPI p. 2/?? Composite Types

More information

On the performance of the Python programming language for serial and parallel scientific computations

On the performance of the Python programming language for serial and parallel scientific computations Scientific Programming 13 (2005) 31 56 31 IOS Press On the performance of the Python programming language for serial and parallel scientific computations Xing Cai a,b, Hans Petter Langtangen a,b and Halvard

More information

Speeding up Python using Cython

Speeding up Python using Cython Speeding up Python using Cython Rolf Boomgaarden Thiemo Gries Florian Letsch Universität Hamburg November 28th, 2013 What is Cython? Compiler, compiles Python-like code to C-code Code is still executed

More information

Distributed Memory Parallel Programming

Distributed Memory Parallel Programming COSC Big Data Analytics Parallel Programming using MPI Edgar Gabriel Spring 201 Distributed Memory Parallel Programming Vast majority of clusters are homogeneous Necessitated by the complexity of maintaining

More information

Collective Communication in MPI and Advanced Features

Collective Communication in MPI and Advanced Features Collective Communication in MPI and Advanced Features Pacheco s book. Chapter 3 T. Yang, CS240A. Part of slides from the text book, CS267 K. Yelick from UC Berkeley and B. Gropp, ANL Outline Collective

More information

3.Constructors and Destructors. Develop cpp program to implement constructor and destructor.

3.Constructors and Destructors. Develop cpp program to implement constructor and destructor. 3.Constructors and Destructors Develop cpp program to implement constructor and destructor. Constructors A constructor is a special member function whose task is to initialize the objects of its class.

More information

Visual Analyzer V2.1 User s Guide

Visual Analyzer V2.1 User s Guide Visual Analyzer V2.1 User s Guide Visual Analyzer V2.1 User s Guide Page 2 Preface Purpose of This Manual This manual explains how to use the Visual Analyzer. The Visual Analyzer operates under the following

More information

HPC Parallel Programing Multi-node Computation with MPI - I

HPC Parallel Programing Multi-node Computation with MPI - I HPC Parallel Programing Multi-node Computation with MPI - I Parallelization and Optimization Group TATA Consultancy Services, Sahyadri Park Pune, India TCS all rights reserved April 29, 2013 Copyright

More information

Function call overhead benchmarks with MATLAB, Octave, Python, Cython and C

Function call overhead benchmarks with MATLAB, Octave, Python, Cython and C Function call overhead benchmarks with MATLAB, Octave, Python, Cython and C André Gaul September 23, 2018 arxiv:1202.2736v1 [cs.pl] 13 Feb 2012 1 Background In many applications a function has to be called

More information