Over the last three months I have given three presentations.
On September 16, during PyConPL 2012, I gave the presentation “Asynchronous and event-driven PyOpenCL programming”. I showed how to use events and queues to call OpenCL code asynchronously from Python, and how this makes PyOpenCL useful in ordinary programs, not only for scientific purposes.
On October 21 I gave the presentation “PyOpenCL – unleash your GPU with the help of Python” at PyCon Ukraine 2012 in Kyiv. I started with a short introduction to OpenCL and PyOpenCL, and again tried to convince the audience that GPGPU can be used in ordinary programs, especially with the help of PyOpenCL and its high-level features like reduction or parallel prefix scan.
The last of the presentations was on November 12 at PyWaw 18. It was not a programming-related talk – I spoke about PyCon Ukraine: my impressions, how it went, and so on.
During and after my talks at PyConPL and PyCon Ukraine I got questions related to GPU programming. Listeners asked about debugging and profiling GPU code, and about performance differences between OpenCL and CUDA. One very interesting question was about the existence of a library of kernels (for either CUDA or OpenCL) with the most common functions and computations. PyOpenCL provides some of these (like the reduction and prefix sum mentioned above), but I have not heard of a CPAN-for-GPGPU. It might be a good concept, though.
In summary, there was some interest in GPGPU, but in the time since my presentations I have not seen many new discussions on the PyCUDA or PyOpenCL mailing lists. This means that either I am not that good a speaker ( 🙂 ) or GPGPU is still considered a niche topic.