Cloud scripts for Python 3

I promised earlier to update my cloud-related scripts to work on Python 3. I have just done so, and it did not require many changes. From now on I'll be publishing code there for Python 3, not Python 2.
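For the record, the changes were of the mechanical kind that most Python 2 to 3 ports need. A minimal sketch of that kind of change – the function and file names below are made up for illustration, not taken from the actual scripts:

    # Typical Python 2 -> 3 adjustments; names here are hypothetical examples.

    def report_upload(name, size):
        # Python 2: print "Uploading", name, size
        print("Uploading", name, size)        # print is now a function

    def read_config(path):
        try:
            with open(path) as f:
                return dict(line.split("=", 1) for line in f if "=" in line)
        # Python 2: except IOError, e:
        except IOError as e:                  # "except ... as e" syntax
            print("Cannot read config:", e)
            return {}

    def iterate_keys(config):
        # Python 2: config.iteritems()
        for key, value in config.items():     # iteritems() is gone
            yield key.strip(), value.strip()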

ACM Applications Review

I am a member of the Association for Computing Machinery. It's an organization providing (among other things) access to magazines, news, books, and courses. Access is provided through web pages and downloadable PDFs, but there are also mobile applications. Here I describe three of them:

  • CACM, providing access to the monthly Communications of the ACM
  • TechNews, providing computer-science-related news three times a week
  • interactions, providing access to interactions, the bimonthly magazine for SIGCHI members

All are published under the Google Play account Assoc. Computing Machinery but, as can be seen from their identifiers, were made by different companies.

All of them require providing an ACM web account login and password to access content [screenshot: CACM login screen].

TechNews

It provides access to news three times a week. There are usually ten summaries with links to the original articles. I find TechNews useful for keeping myself up to date with trends in computer science. ACM sends TechNews as an email to its members, and also provides this mobile application.

I do not use the TechNews application very often. It shows the list of titles of the recent articles [screenshot: TechNews with list of news]; one can tap a title and go to the short summary [screenshot: TechNews with text of one article]. This workflow does not suit me well. When I read TechNews from email I scroll and skim over all the news. In the application I would have to go through each item separately, which takes more time than using email.

The TechNews application could be more useful for browsing the archive. Unfortunately the design here is rather bad [screenshot: TechNews screen to choose date of news]. News appears every Monday, Wednesday, and Friday, except when there is a holiday in the USA (ACM is located in New York). Unfortunately there is no way of knowing which day of the week we've chosen, or whether a newsletter was sent on the selected day, without trying to fetch the news – and failing. After a few tries I got discouraged. I do not use the ability to bookmark interesting news – to do so I would need to interact with the application more often.

Also, the UI feels like it came from an iOS application. It might be OK on iOS, but on Android it feels alien and repulsive.

Writing this, I realized that I might as well uninstall this application; I've started it maybe three times in the last year.

CACM

This is the mobile version of Communications of the ACM, the flagship publication of the organization. It could be the preferred way of reading CACM electronically. While I download PDFs of interesting articles, I do not like reading on the screen; I already spend too much time in front of a computer. That's why I also do not visit http://cacm.acm.org/.

The application does not display the cover of the current issue, so sometimes I have trouble telling which issue I should be reading.

The first problem I had was with entering the password. CACM has an artificial limit on password length: it accepts only 15 characters, while the ACM web account allows for 26. Such inconsistency with the account policy is not pleasant when trying to access content on a mobile device.

The very first screen we see after logging in is not encouraging [screenshot: CACM screen with list of articles in the current issue]. It's a list of articles from the current issue, but it doesn't feel like a magazine; it's quite similar to the list of snippets from TechNews. Also, many articles are just short pieces linking to web pages [screenshot: CACM short article linking to a web page], which means that I would need to be online all the time to use the application. As a Luddite I disconnect my phone from the network when I'm not using it. Another problem with following the links is that (just like TechNews) the application seems to come from iOS and does not use Android technologies. Instead of opening the Application Chooser when following links, it opens its own embedded browser [screenshot: CACM embedded browser displaying the longer version of an article]. This means that I do not have access to my saved passwords and I cannot save bookmarks.

The embedded browser fails when trying to render HTML [screenshot: CACM embedded browser displaying a web page]. It seems to have problems with displaying pages and zooming [screenshot: CACM embedded browser with the page shifted to the right]. The page in the screenshot was not scrolled by me – it was displayed like this, with half of the content cut off. I was able to scroll right, but not left.

Articles are presented as web pages. Instead of placing images and tables in the text, or showing them after a click, the application puts them at the bottom, so one needs to scroll there, look at them, and scroll back – manually. Locating tables outside of the main text makes sense in a paper magazine, but not in a dedicated application, where the user is able to click but has few clues about their location in the text.

interactions

I use it most often of all the applications I describe. It feels like a real magazine, as one can see the covers of the issues [screenshot: interactions with covers of available issues]. It also offers the ability to download issues for offline reading – downloaded ones are marked with a green triangle. There are two reading modes: a magazine mode, where pages are shown exactly as on paper, and a web-page-like mode. I find the latter nice and the former unusable – but maybe it would look better on a large tablet.

I like the yellow marking of active elements; after displaying a new page, the application highlights in yellow the elements that will respond to a tap. It shows the user that the presented content is interactive, that it's not just a scanned paper magazine.

There are problems, though. Even though the application caches issues for offline reading, images for the non-magazine layout sometimes go missing. They are displayed in the magazine layout [screenshot: interactions displaying a page in printed layout] and in the web layout when online [screenshot: interactions displaying a page in web layout, with images], but are missing when one is offline [screenshot: interactions displaying a page in web layout, without images]. This makes offline mode less usable.

There are other problems with offline mode. When the application sits for a few hours in the background while offline, it asks to refresh credentials [screenshot: interactions requiring logging in]. It does not check that the device is offline and that there is no possibility of connecting to the server. But hitting Back a few times returns to the main screen and one is able to use the application again without logging in. Sometimes the application ignores its cached content and behaves as if it were started for the first time [screenshot: interactions requiring download of content]. In such a case one needs to connect – and after that the application can again access the downloaded data without any problems.

The application has problems with displaying pages in the magazine layout; sometimes instead of displaying pages it displays the space between them [screenshot: interactions displaying the gap between pages].

Again, just like CACM, interactions uses an embedded browser instead of letting the user pick one. This is especially amusing when there is a link to YouTube, Vimeo, or another video site. The embedded browser cannot cope with YouTube videos [screenshot: interactions displaying a YouTube page in the embedded browser], so it's more frustrating than reading the paper magazine, where it's natural that we cannot watch the video.

Summary

It's good that companies and organizations are providing mobile applications. But applications should offer more than web pages do. For now TechNews is just like a mobile page, but in its own sandbox.

Applications should also integrate with the platform. Both CACM and interactions behave as if they were written for iOS and then ported to Android without taking the platform's specifics into consideration. Using non-standard icons for sharing content and embedding a browser instead of using the system one gives the feeling that something is wrong.

Applications should also feel like they really come from one organization. Although CACM and interactions are both supposed to present magazines, they are completely different. They differ in how they present content, how they allow browsing of archival issues, and whether they allow offline access. Lessons from interactions were not incorporated into CACM.

Applications should also integrate with their environment. Both applications provide content from the Digital Library and require logging in. But when one saves a bookmark or an article, there is no integration with the Digital Library personal bookshelf.

Basically, it looks like each application is its own fiefdom. They are written by different companies (which can be seen from their IDs) and there is no knowledge transfer between them. There seems to be no single person, committee, or group in ACM responsible for mobile content. The described applications are published under the account Assoc. for Computing Machinery. Recently an application appeared that provides access to the Digital Library (thus duplicating part of the functionality of the two described applications), from a separate account, Association for Computing Machinery. I find it strange and confusing, and it suggests that nobody at ACM is able to deal with this mess.

Two keynotes

To keep myself up to date I like to watch presentations from various conferences. Some time ago I watched two keynotes: one from AWS re:Invent 2013, and another from the Samsung Developers Conference. Both conferences were intended to introduce developers to the companies' new offerings, so the keynotes presented new products and SDKs, and both featured partners using those SDKs in their own products.

Werner Vogels, Amazon's CTO, gave the re:Invent keynote. He presented interesting products: the inclusion of PostgreSQL in Amazon RDS (finally!), Kinesis – a new tool for analysing streams of data – and CloudTrail, which records all AWS API calls into S3, allowing for better auditing of operations in the cloud.
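To get a feel for Kinesis, here is a minimal sketch of pushing one record into a stream from Python. I'm using boto3, the current AWS SDK for Python; the stream name and payload are made up for illustration:

    import boto3  # assumes AWS credentials are already configured

    kinesis = boto3.client("kinesis", region_name="eu-west-1")

    # "click-stream" is a hypothetical, already-created stream.
    response = kinesis.put_record(
        StreamName="click-stream",
        Data=b'{"user": 42, "action": "view"}',
        PartitionKey="user-42",   # records with the same key land on the same shard
    )
    print("stored as sequence number", response["SequenceNumber"])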

But there was one moment which made my hair stand on end. At 1:22:55 Vogels pointed to something he was wearing on his suit and informed everyone that it was a Narrative Clip, made by a company from Sweden – a camera which takes a photo every 30 seconds and uploads it to Amazon S3. It is an interesting use of technology and I can see why he was eager to show it.

But Vogels said that he had been wearing it all the time at the conference: while preparing his talk, when talking with people, and so on. This is when I felt strong disagreement with his eagerness to wear it. I felt as if he had betrayed the trust of all the people who interacted with him. I know that at a conference there is little expectation of privacy, with everyone taking photos, press teams making videos and promotional clips, and anyone able to overhear other people's conversations. But in my opinion this is different. There is a difference between having a conversation that someone happens to hear, and having a conversation where the other party records it. The latter erodes trust. There is a reason why these are called “private conversations”. I'm sad that we, rushing to try every new technological gadget like the Narrative Clip or Google Glass, seem to be losing this trust in interpersonal relationships. Knowing that what I say and how I look could be exported to the cloud for the whole world (or at least all the governments) to see means that I will not be sincere; instead of saying what I mean, I'll be thinking about how what I say might be used against me now – or in a few years' time. It is basically as if I were under a permanent Miranda warning – “anything you say (or do) may be used against you” – not only in official situations, but in supposedly innocent conversations with another person.

The Samsung keynote was presented by six to eight Samsung Vice Presidents (I lost count) and people from partner companies. The lack of one main presenter and the attempt to squeeze many unrelated products into one talk meant that I did not have the feeling of continuity I had while watching the re:Invent keynote.

This keynote also raised some privacy-related concerns, caused by Eric Edward Andersen, Vice President for Smart TV, presenting Smart TV SDK 5.0. He started his part of the talk by speaking about emotional connection, about the emotions related to interacting with content on the TV screen. Then he presented a new TV with a quad-core CPU, which is apparently needed because “it [the TV] is learning from your behaviour”. Do I really want my TV to learn my behaviour? All existing technologies assume that my taste is constant: as soon as the technology learns my behaviour and what I like, it can start showing me what it suspects I will like. But what about discovering new things? What about growing as a person? YouTube tries to suggest things it thinks I might find interesting. One of the problems is that it tends to stick to things I watched in the past. There was a channel I watched for some time and then stopped watching – but YouTube still puts it in the suggestions, months later. On the same note, Google's integration of services is really scary. I opened a page about anime using Chrome (not my usual browser), and now YouTube suggests anime for me to watch. OK, I might even find it interesting, but why does it suggest anime in Italian?

A possible privacy violation appeared later, at 39:06. Andersen showed some numbers on how long people interact with different applications on their smart TVs, for example how long Hulu or Netflix sessions last. I think the main idea was to show programmers that people spend a lot of time in front of the TV, interacting with different applications and consuming content, so it would be wise to write software for smart TVs. But I was left with a different feeling. Samsung having this data means that the TV sends usage information back to the mothership; Andersen mentioning how many people are “activating” their TVs seems to confirm this. LG was accused of having TVs that spy on users and send data to the company; it looks like Samsung does something similar.

After seeing this, I am left wondering: what is the advantage of a smart TV? Why would one want to buy a TV that spies on them all the time? Orwell described the modern “smart TV” quite well in his novel 1984 – he called them telescreens. Only Inner Party members were able to turn off telescreens, and even they could not be sure whether the device was still spying on them.

Another part of the presentation was given by Injong Rhee, Senior Vice President for Enterprise Mobile Communication Business. He talked about Samsung KNOX, a solution for managing devices according to company needs. This part of the presentation starts at 1:15:37. Rhee describes the history of creating KNOX:

What I have done.. I took my team to the drawing board to start reengineering and redesigning security architecture of Android. That’s how Samsung KNOX is born.

and

We actually put security mechanisms in each of those layers

and

We have implemented property called Mandatory Access Control or MAC (..) Security Enhancements for Android

and then describes the difference between MAC and the traditional owner/group/other and read/write/execute triplets.

what we have done with the MAC is that we define which system resources the process can access

Basically it sounds like ordinary Security-Enhanced Linux (SELinux), available in Android since 4.3 (“Android sandbox reinforced with SELinux”).
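For contrast, a small sketch of the difference Rhee was describing: under the traditional discretionary model the owner of a resource decides who may use it, while under MAC a system-wide policy decides and the owner cannot override it. The file name and policy rule below are illustrative only:

    import os
    import stat

    # Discretionary access control (the owner/group/other triplets):
    # the file's owner chooses the permissions and can change them at will.
    with open("notes.txt", "w") as f:
        f.write("secret\n")
    os.chmod("notes.txt", stat.S_IRUSR | stat.S_IWUSR)   # rw------- : owner-only access

    # Mandatory access control (SELinux, as used on Android 4.3+) works differently:
    # a policy written by the system vendor says which process *domains* may touch
    # which resource *labels*, regardless of ownership.  A rule looks roughly like:
    #
    #   allow untrusted_app app_data_file:file { read write };
    #
    # and no application, not even the file's owner, can grant itself more than
    # the policy allows.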

Then Rhee presents Dual Personas – the ability to have separate user accounts on one device. This functionality is also available in Android: separate user accounts arrived in Android 4.2, and the ability to add restrictions to accounts in Android 4.3 (“Support for Restricted Profiles”).

It left me with a strange feeling. I do not know what is so unique about KNOX, as it just seems to be a different name for features already available in Android 4.3 – and, what a coincidence, KNOX is also available for Samsung devices with Android 4.3. Samsung probably added some interesting features and functionality in KNOX (maybe the ability to manage those policies centrally), but the presentation did not distinguish between features added by KNOX and those available in plain Android. This seems strange coming from Rhee, who presented himself as a former university professor. As a former professor he should know how to give proper attribution, how to cite others' work, and how to point out what is unique in his own.

I noticed another strange habit. Samsung seems to be of the opinion that a good API is a large one. Of course, having a rich enough set of components that does not restrict the programmer is a sign of a good API. On the other hand, an overgrown API means there are too many things to remember, and it makes programming harder than it should be. Rhee, when talking about KNOX, described it (1:25:55) as the “KNOX API which covers over 1000 APIs or more”, with a slide stating “KNOX SDK: 1090+ APIs for Remote Device Control”. What does that really mean? An API (Application Programming Interface) is a single thing – a set of types, classes, structures, methods, and so on. What does Samsung mean by “API”, then?

It seems that Samsung engineers are inflating numbers just to be able to show impressive, oversized figures. Samsung seems to have trouble with having too many devices and too many versions to manage; they even have trouble updating their own devices. Combined with a “me too” attitude (e.g. promising 64-bit CPUs in mobile phones right after Apple presented the 64-bit iPhone), it does not inspire confidence in their ability to develop the presented technologies, or (for example) to keep their smart TVs up to date. Unlike phones, which are (at least in Poland) replaced every 18 or 24 months when signing new contracts, TVs are replaced less often. And people will grow disappointed when there is no update for their TV and each month something stops working: YouTube changes video codecs and you cannot watch movies from the internet, Skype changes its protocol and suddenly you cannot call people, and so on. Basically, “smart” appliances need much more after-sale care than dumb ones, and companies (except for Apple, which provides updates for its phones far longer than other manufacturers) do not seem to realize this.

Although there are some trends I strongly disagree with, I'm glad that I watched those keynotes. We definitely live in fast-paced times, and although I've stopped trying to keep up with all the new technologies, I think it is important to keep an eye on what various companies are proposing.

AWS Summit in Berlin

We had two public holidays in Poland at the beginning of May: Labour Day on the 1st of May and Constitution Day on the 3rd of May. Most people used this time to visit family, have barbecues, and so on; I decided to take the 2nd of May off and go to Berlin for the Amazon Web Services Summit.

The AWS Summit was held in the Berliner Congress Center at Alexanderplatz. This is the same place that hosted the Chaos Communication Congress for many years, so it brought back some memories.

Again there was a queue at the entrance, although it was shorter than before the Congress. On the other hand, there was no Heart of Gold, nor blinking lights in the windows; the BCC looked professional. Also, there were security guards checking our bags at the entrance. I wonder what they were looking for…

Inside there were again some similarities, inevitable at an event with over three thousand people (as the organizers have not published official attendance numbers, I am estimating based on how crowded the lecture rooms were): there were long queues for the WC, queues for food, and people eating everywhere (on the stairs, etc.). Because this was a computer event, most of the people were not very social, eating on their own and not trying to make contact; this changed after the closing event, when beer was provided 😉

As the AWS Summit was a professional event, not a hacker congress, there were differences. The food court looked empty, even boring, without blinkenlights: it became yet another place to have your lunch. Instead of Engels (the CCC volunteer angels) there were hostesses; the good part was that they were nicely dressed (not underdressed like the ones at Confitura 2012).

I never thought I would say this, but I somehow missed Nick Farr shouting from the stage to raise your hand if there was a free seat next to you; there was less feeling of community, less eagerness to make room so fellow hackers could sit down and listen to the lecture. But the conversations with people during breaks and after the closing were as interesting as at other conferences.

Oh, the lecture rooms… Just like at CCC, there were problems with more people wanting to listen to a lecture than could fit into the room. People waited in queues, and some were not let in to the few most interesting talks due to lack of space. Someone (I do not know who – the crowd was too big) joked that “they should just instantiate another room for the talk during such high demand”. Yes, this shows that clouds cannot overcome the limitations of the physical world. The difference from CCC was that there was no streaming of talks, so those of us who failed to get a seat did not have a chance to watch them.

The organizers wanted to count attendance. They did not use Sputniks; instead our badges had barcodes printed on them and a poor hostess had to scan everyone entering the room. It was not foolproof – e.g. I entered a lecture room early (during the lunch break) to get a seat, so I was not counted as attending that talk.

Most of the talks were about the technical details of AWS. I will just mention a few interesting thoughts from the keynotes. Werner Vogels, CTO of amazon.com, said something along the lines of “just like Human Resources employs people when they are needed and reduces them during lower demand, you can do the same with your computing capabilities”. I do not want to be treated as a commodity (or a resource to be managed, for that matter), and I repeat after The Prisoner: “I am not a number!”. I believe that treating people, employees, as a commodity is part of the problem with the economy today. This was especially ironic when said on the 2nd of May, the day after May 1st, International Workers' Day.

On the other hand, Nikolai Longolius, CEO of Schnee von morgen Web TV, made me feel old. He used the phrase “we started with the cloud in 2006, so we are grandfathers”. Other speakers also used phrases like “it is the old way of computing, used in the 1990s or 2000s”. Hey, I know that in computing time flows faster, but it might be a good idea to stop from time to time and check whether the past offers us some important lessons.

In summary, I'm glad I attended the Summit. I learned a lot, and talking with the people responsible, for example, for Glacier helped me understand it better and fix some of my scripts. I met some interesting people attending the Summit. It also helped me see the Congress from a different perspective and changed my expectations for OHM 2013. I am waiting for it impatiently, as it's only three months from now!

A.M. Turing Award lectures

I've just watched two lectures given by laureates of the ACM Turing Award. The first lecture was given by Barbara Liskov in 2009 and the second by Chuck Thacker in 2010. Both lectures touch on many interesting topics, and I do not want to merely summarize them, as that would be a disservice to the presenters. Instead I'll focus on one aspect of computer science present in both.

They both talk about past experiences in the development of computer science. Liskov describes how she was involved in the implementation of the CLU programming language. She describes the situation regarding programming languages in the 1960s and 1970s. There were many different programming languages and they offered different choices. For example, there were several approaches to exception handling. One approach was termination (the one known today) and another was resumption: after handling an exception, code could order a return to the procedure which raised it. One could also use FAILURE exceptions, which were somewhat similar to today's Java runtime exceptions, but one could change any exception into a failure, putting the original exception as an argument of the failure (similar to today's exception wrapping). There was also a special Guardian module responsible for catching uncaught exceptions, which seems similar to the approach known from virtual machines, but each unit (module) could have its own Guardian, so exceptions were confined inside modules. She describes the implementation of iterators, which seem similar to Python's generators with yield; even the way iterators were implemented resembles how generators were implemented in Python. First there were just generators (PEP 255), and then they were extended to allow for coroutines (PEP 342). Liskov stopped before implementing coroutines. Python goes even further, with subgenerators (PEP 380) and generators used for asynchronous programming (the currently discussed PEP 3156). Liskov said that CLU was “way ahead of its time” – and it is true. Only today do we see its concepts implemented.
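To make the parallel concrete, here is a small Python sketch of the features she anticipated: a plain generator (PEP 255), a yield-as-expression coroutine (PEP 342), and delegation to a subgenerator (PEP 380). It is my own toy example, not code from the lecture:

    def countdown(n):
        # A plain generator (PEP 255) -- roughly a CLU iterator: it yields values
        # one at a time and keeps its own state between calls.
        while n > 0:
            yield n
            n -= 1

    def averager():
        # yield used as an expression (PEP 342) turns the generator into a
        # coroutine that receives values via send().
        total, count = 0.0, 0
        average = None
        while True:
            value = yield average
            total += value
            count += 1
            average = total / count

    def two_rounds(n):
        # Delegation to a subgenerator (PEP 380).
        yield from countdown(n)
        yield from countdown(n)

    print(list(countdown(3)))          # [3, 2, 1]
    print(list(two_rounds(2)))         # [2, 1, 2, 1]
    avg = averager()
    next(avg)                          # prime the coroutine
    print(avg.send(10), avg.send(20))  # 10.0 15.0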

Another concept described by Liskov which is now implemented in current programming languages is collections with WHERE clauses. It is similar to generics in, for example, C#, with restrictions imposed on the type parameters. In CLU the parameter had to implement certain methods (a concept similar to duck typing); in C# one requires that the argument implements some interface. It feels strange to see a concept from the 1970s re-discovered and implemented only in 2006.
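A rough Python analogy (my own illustration): the container below works for any element type whose values support "<", which is what CLU's WHERE clause stated explicitly and what C# now expresses with a generic constraint:

    import bisect

    class SortedBag:
        """Keeps its elements sorted; the only assumption about the element
        type is that its values can be compared with "<" (duck typing), which
        plays the role of an explicit WHERE restriction on the parameter."""

        def __init__(self):
            self._items = []

        def add(self, item):
            bisect.insort(self._items, item)   # relies on item.__lt__

        def smallest(self):
            return self._items[0]

    bag = SortedBag()
    for word in ("pear", "apple", "plum"):
        bag.add(word)
    print(bag.smallest())   # apple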

But it becomes clearer after watching Thacker's lecture. He notes that “computer science is a very forgetful discipline”. Thacker talks about all the walls we are facing today – the memory wall, the power wall, the complexity wall. His entire lecture is about history and its influence today. We (computer science people) made certain choices back then and now we live with them. This can be seen, for example, in the BIOS – only now are we seeing the migration to UEFI, and not without many problems (see Matthew Garrett's work). Thacker uses interrupts as an example of such a legacy of past limitations: interrupts made sense in a single-core system, but now they complicate things and make little sense on multi-core. The same can be seen when looking at programming languages today – I do not know of a language which implements the resumption exceptions mentioned by Liskov. Thacker wonders how we would make those choices today, given current knowledge and technology.

Many of the choices Thacker mentions were made as a result of scarcity. He mentions the trade-off between shared memory and message passing. Shared memory was easier to implement, which was probably why it was chosen over messaging – but now it poses coherency problems on multi-core chips. A well-known example is virtual memory, which was the result of small amounts of RAM and the need to spill memory to disk. Now we have plenty of memory, so swap is not needed (e.g. Android does not use swap; on the other hand the Nokia N900 had swap implemented on flash). The need to virtualise memory to be able to find a large contiguous chunk of RAM can be addressed by a garbage collector… So in theory we could give up some parts of the current memory layout in hardware and operating systems. I do not agree with Thacker that we should also give up the protection provided by virtual memory; he suggests that this should be solved by using safe languages (i.e. not C), but we also need to deal with rogue programs and multi-user environments – so I think the protection offered by virtual memory is a Good Thing(TM).

Another problem which we are only experiencing today is related to threads and locking. In the past, systems were not large enough to expose the problems with locking – i.e. that locks do not compose, so smaller systems cannot simply be combined into larger ones. Thacker does not believe in transactional memory. He does not like transactions because they are speculative and we do not have much experience with them; we use them in databases, but not at the smaller scale of multiple threads at the CPU level.
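A small sketch of the composition problem (my own toy example, not from the lecture): two accounts, each correctly protected by its own lock, can still deadlock when two threads compose transfers in opposite directions:

    import threading

    class Account:
        def __init__(self, balance):
            self.balance = balance
            self.lock = threading.Lock()

    def transfer(src, dst, amount):
        # Each account is individually thread-safe, but composing the two
        # locks naively creates a lock-ordering problem.
        with src.lock:
            with dst.lock:
                src.balance -= amount
                dst.balance += amount

    a, b = Account(100), Account(100)
    # If these two threads run concurrently, thread 1 may hold a.lock while
    # thread 2 holds b.lock, and each then waits forever for the other.
    t1 = threading.Thread(target=transfer, args=(a, b, 10))
    t2 = threading.Thread(target=transfer, args=(b, a, 10))
    # A global lock ordering (or transactions) is needed to make this compose safely.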

Thacker notes that the 1950s and 60s were an age of experimentation, the 70s and 80s were a period of consolidation and warfare (only a few of the existing solutions survived – he was talking about CPUs, but the same is true for programming languages), and the 90s were about Instruction-Level Parallelism. But there is not much ILP in most programs that can be exploited automatically. From my experience with GPGPU programming, one needs to work on the program to get performance gains, and not much can be done by the compiler alone, without programmer intervention. Also, there are not many applications which can use many cores; one does not need dozens of cores to watch video or write email.
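As an illustration of how little comes for free, here is roughly what a trivial PyCUDA kernel looks like: the programmer, not the compiler, decides how work is split into blocks and threads. A sketch only; it assumes a CUDA-capable GPU and PyCUDA installed:

    import numpy as np
    import pycuda.autoinit            # creates a CUDA context on the default device
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    # The kernel is explicit about parallel structure: each thread computes one
    # element, and the index comes from the block/thread layout chosen by the
    # programmer.
    mod = SourceModule("""
    __global__ void scale(float *dest, float *src, float factor)
    {
        int i = threadIdx.x + blockIdx.x * blockDim.x;
        dest[i] = src[i] * factor;
    }
    """)
    scale = mod.get_function("scale")

    src = np.random.randn(512).astype(np.float32)
    dest = np.empty_like(src)
    # Block and grid sizes are the programmer's decision, not the compiler's.
    scale(drv.Out(dest), drv.In(src), np.float32(2.0), block=(256, 1, 1), grid=(2, 1))
    print(np.allclose(dest, 2.0 * src))   # True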

The problem is that we have learned to live with those limitations; many of them work well enough, and the cost of changing to something better is too high. We might need to rethink all those decisions – not necessarily change them, but decide again, in today's situation and with current knowledge.

The problem with changing such widely used solutions as interrupts is that the cost is paid now while the profits come only in the long term. The need for many players to agree does not help. At the same time, this might be a good field for experimentation using virtual machines or various free software solutions: one can experiment with existing code on new architectures by just recompiling programs or implementing a virtual machine on the new architecture. There is also the question of how much control to give programmers. Too much, and programs will be hard to port; too little, and they will have trouble getting performance. For example, the new Exynos 5 CPU offers four fast cores and four slow cores; the system chooses which cores to use depending on the load. But what if I, the developer, want to use one fast core and one slow core, or some other combination? I will not have this ability from the Android level – Dalvik operates at a higher level than that.
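For comparison, a small sketch of the kind of control I mean. On a plain Linux box, CPython (3.3 and later) exposes the scheduler's CPU-affinity call, so a process can pin itself to chosen cores; Android does not expose anything like this to application developers. Which core ids are “fast” and which are “slow” depends on the SoC, so the choice below is only illustrative:

    import os

    available = sorted(os.sched_getaffinity(0))     # cores this process may run on
    print("allowed cores:", available)

    # Pin the process to two of them; on a big.LITTLE chip these could be
    # chosen as one fast and one slow core -- if we knew which was which.
    os.sched_setaffinity(0, set(available[:2]))
    print("allowed cores:", sorted(os.sched_getaffinity(0)))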

I agree with Thacker that we live in very interesting times for computer science. It looks like we need to rethink some of the basics of our systems – and maybe repeat such a process every time we get new hardware (for example GPGPU, heterogeneous chips, and so on). The problem is that we, as a discipline, have forgotten many of the possibilities discovered and abandoned in the past – and those possibilities might be more relevant today. Python's example, however, shows that good ideas survive and get implemented, even if it takes many years.

On mobile phones in Poland

This is different from my usual posts about programming, GPUs, etc., so you can skip it if you are not interested in mobile plans.

Recently I decided to change my phone. I have a Nokia N900, which was a good and promising phone. I was using it less and less as a smartphone, though – its browser had problems dealing with web pages and there were not many applications for it. Even computer-related conferences publish applications for Android and iPhone and none for the N900. My ties with the Maemo community are very weak now – I haven't logged into my Maemo accounts for months. So I decided to go with the crowd and buy an Android device. This meant some changes, and two recent posts by Russell Coker, one about international calls and another about changing SIM card formats, struck a chord, so I decided to write this post.

I had to get a new SIM card because my new device has a microSIM slot. Unfortunately, only one mobile company in Poland exchanges them for free – with the others you need to pay for a new SIM, up to 50 PLN + VAT (about 15 EUR). So I decided to try switching mobile providers – I would pay less when signing a new contract than as an existing customer (yes, I also think this is a stupid policy: keeping an existing customer vs. acquiring a new one…). One can keep one's phone number while changing mobile providers in Poland, so I was less hesitant to try a new company. I am not a very social person, so I do not need many “free” minutes – but I wanted a large internet quota. I visited the sales representatives of all the mobile providers and told them:

I do not need many minutes – 60 per month is enough. I do want a large internet package though – something like 500 MB to 1 GB. Oh, and if you have some plan with more minutes which can be used for international calls, I'll gladly take it.

I was saying this in Polish, my native language, and all the sales people were also Polish – so there should have been no language barrier. There was. I was getting responses like:

We have a wonderful plan for you. You'll get 200 minutes, and because you are moving your number to our network you'll get 30% more minutes. Internet – oh, you need to buy this additional internet package which contains 200 MB. As for international calls, we do not have anything like that, so you will be paying the maximum per-minute rate allowed by the EU. Are you ready to sign?

And all of that for two to three times more than what I was paying. There was one plan with international minutes, but it was very expensive and only for businesses; I would need to buy the phone for a company, not for personal use. So I decided to stay with my current provider. As I was renewing my contract, I even got a new microSIM free of charge.

It seems that Polish mobile providers are still living in the past, thinking that all customers want just one thing: more and more minutes. Even though some companies are now part of international networks (we have Orange and T-Mobile), a potential customer rarely sees any advantage of being a customer of an international company. I was using Orange when I was in Switzerland. The plan I had included 1 GB of internet access and minutes to the EU, USA, and Canada. (Funny fact: calls to Poland did not use the minutes included in the plan and I had to pay for them additionally, so it seems Poland is not part of the EU according to Orange Switzerland.) There are some signs of change, though: Orange Poland recently started offering plans with included minutes to the EU and USA – but only for landlines.

In summary, answering Russell Coker’s questions:

  1. New SIM card formats exist so that mobile providers can charge customers for changing their SIMs or force them to renew contracts.
  2. People do not call internationally because many plans do not offer cheap international calls. People who have many international contacts tend to use VoIP or similar solutions, avoiding paying telecoms.

Hello world!

Hello.

My name is Tomasz Rybak. I am currently working at the University of Geneva as a research assistant. My main responsibility is to allow Palabos, a Lattice Boltzmann library written in C++, to use Sailfish, a Lattice Boltzmann library written in Python using PyCUDA and PyOpenCL, so that Lattice Boltzmann simulations can be executed on both the CPU and GPU at the same time.
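This is possible because OpenCL treats CPUs and GPUs as interchangeable compute devices. A minimal PyOpenCL sketch of discovering and splitting the available devices (my own illustration, not Sailfish or Palabos code):

    import pyopencl as cl

    # Every OpenCL device visible on this machine, CPUs and GPUs alike.
    devices = [d for platform in cl.get_platforms() for d in platform.get_devices()]
    for d in devices:
        print(d.name)

    # Split them by type; the same kernels can then be queued to both at once.
    gpus = [d for d in devices if d.type & cl.device_type.GPU]
    cpus = [d for d in devices if d.type & cl.device_type.CPU]
    contexts = [cl.Context([d]) for d in gpus + cpus]
    print(len(gpus), "GPU device(s),", len(cpus), "CPU device(s)")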

I am maintaining 3 Debian packages:

I am also active in the PyCUDA and PyOpenCL communities.