Maintaining localization between Python and PHP (it's not fun)

I reached my hand into the barrel of problems our migration to Python is going to cause and came up with localization. It figures.

First out of the chute were the .po files. It turns out the actual formatting is different between the two languages. PHP uses %1$s for its ordered substitutions, but Python uses either named variables like %(num)s or integers like {0}. For the record, they both support a plain %s when you don't need to order the substitutions.

    PHP example:    I have %2$s apples and %1$s oranges
    Python example: I have {1} apples and {0} oranges

Since I've worked with the Translate Toolkit before, I decided to write a script to convert between the two formats. If you find yourself in the same unfortunate boat as me, behold phppo2pypo and pypo2phppo to convert between the two types (a rough sketch of the rewriting appears below).

Crisis averted, right? Oh, that's just scratching the surface. Remember how happy I was that PHP finally started supporting msgctxt? Well, Python has had a patch for it since 2008 but no one has bothered to land it. I wrote a new ugettext() and ungettext() that recognize context in the .po files. To use them, simply put this at the top of your file:

    from l10n import ugettext as _

Along with adding msgctxt support, those two functions also collapse consecutive white space. We're using Jinja2 with Babel and the i18n extension as our template engine. Jinja2 has a concept of stripping white space from the beginning or end of a string but does nothing about the middle. A paragraph of text in a Jinja2 template would look like:

    {% trans -%}
    Mozilla is providing links to these applications as a courtesy, and makes
    no representations regarding the applications or any information related
    thereto. Any questions, complaints or claims regarding the applications
    must be directed to the appropriate software vendor.
    {%- endtrans %}

That's a decent looking template, right? Yeah, well, when Babel extracts that, it includes all the line breaks and indentation in the msgid too. The localizers would revolt if I sent them that, so I added in automatic white-space collapsing.

Getting Babel to use the new functions means a new extraction script. At this point, we're extracting strings from our new code and we can convert between Python and PHP files. All we need now is a Frankenstein mix of xgettext functions to act as glue. Meet the amalgamate script, which uses the pypo2phppo script, concatenates the .pot files, and merge-updates each locale's .po file. After that it's quick tweaks to the build scripts to create z-messages.po files and we're done.
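Before laying out the new process, here are two sketches to make the machinery above concrete. First, the substitution rewriting: a minimal, hypothetical version of what phppo2pypo and pypo2phppo do (the real Translate Toolkit based scripts handle named variables, plurals, and plenty of edge cases):

    import re

    # PHP counts placeholders from 1 (%1$s); str.format counts from 0 ({0}).

    def php_to_python(s):
        # 'I have %2$s apples and %1$s oranges' -> 'I have {1} apples and {0} oranges'
        return re.sub(r'%(\d+)\$s', lambda m: '{%d}' % (int(m.group(1)) - 1), s)

    def python_to_php(s):
        # 'I have {1} apples and {0} oranges' -> 'I have %2$s apples and %1$s oranges'
        return re.sub(r'\{(\d+)\}', lambda m: '%%%d$s' % (int(m.group(1)) + 1), s)

Second, the context-aware lookup. GNU gettext compiles msgctxt into the .mo file by joining context and msgid with an EOT byte, so a wrapper can lean on that convention even without the unlanded Python patch. This is only a sketch of the idea in modern Python, not the actual l10n module; the signature is my assumption:

    import gettext
    import re

    CONTEXT_SEP = '\x04'  # GNU gettext joins msgctxt and msgid with EOT

    translation = gettext.translation('messages', localedir='locale', fallback=True)

    def ugettext(message, context=None):
        # Collapse consecutive white space, as described above; the extraction
        # script has to collapse msgids the same way for lookups to match.
        message = re.sub(r'\s+', ' ', message.strip())
        msgid = context + CONTEXT_SEP + message if context else message
        result = translation.gettext(msgid)
        # An untranslated contextual string comes back with the separator
        # still in it; fall back to the bare English message in that case.
        return message if CONTEXT_SEP in result else result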
So, all that said, the new process for L10n while we're in this transitional phase is:

1. From the PHP code, run locale/extract-po-remora.sh. That pulls everything from all the PHP files, creates locale/r-keys.pot, updates the messages.po file for each locale, and compiles them. Life used to be so simple.
2. From the Python code, make sure you're up to date, then run ./manage.py extract. That will pull everything from the Python code and templates and create locale/z-keys.pot.
3. Run ./manage.py amalgamate. That will merge the z-keys.pot into the PHP messages.po files.
4. Localizers can make their changes as usual and commit back to messages.po.
5. From PHP, locale/copy-to-zamboni.py locale will create z-messages.po files in the Python format. We could skip right to .mo files, but in case something goes wrong I want to see the .po files.
6. Then, like today, locale/compile-mo.sh locale will compile all the .po files.

After all those steps are done, we've got duplicate .mo files (aside from formatting), and each application can look at its own .mo to get the strings it needs. All this code is just a big band-aid, and there are plenty of things that are more fun than juggling L10n between two applications across two RCSs. But we knew what we were getting into. I'll post something more positive later to help justify it. :)

AMO Development Changes in 2010

The AMO team met in Mountain View last week to develop a 2010 plan. We've been wanting to change some key areas of our development flow for a while, but we needed to make sure time was budgeted in the overall AMO and Mozilla goals. As usual, the timeline will be tight, but the AMO developers do amazing work, and as our changes are implemented, development should just get faster. I'll give a brief summary of the changes we're planning; a lot of discussion went into this and I'm not going to be able to cover everything here. If you've been in the AMO calls or reading the notes you probably already know most of this.

Migrating from CakePHP to Django

This is a big undertaking and we've been discussing it for quite a while. We're currently the highest-trafficked site on the internet using CakePHP, and along with that we've run into a lot of frustrating issues. CakePHP has serviced AMO well for several years, so it's not my intention to bad mouth it here, but I do want to give a fair summary of why we're moving on. Please also note that AMO is still running on CakePHP 1.1 which is, I think, a year out of date? Three substantial issues:

- Useful database abstraction layer: CakePHP has a concept of database abstraction, but we didn't find it powerful enough. When it did work, it would return enormous nested arrays of data, causing massive CPU and memory usage (out of memory errors plague us on AMO). When it didn't work, we'd end up doing queries directly, which kind of defeats the purpose. We couldn't use prepared statements, so we'd have to escape variables ourselves. There was no effective caching built in, and since we just had huge arrays as a response, there was no effective way to invalidate the cache we were using (see: Caching is easy; Expiration is hard). The DB layer should return objects that are easy to cache and easy to invalidate. The built-in Django database classes (combined with memcache) should work fine for us here; there's a sketch of the pattern we're after at the end of this section.
- Effective unit tests: I've beat the drum about our unit tests before, but the simple matter is that it's really difficult to do them right with the tools we are using. Our test data is already very limited, but if we try to run all our tests right now they'll run out of memory (and take forever). The CakePHP method of mocking controllers and models was inadequate for what we needed and difficult to deal with. We want our unit tests to run quickly, from the command line, and be independent from each other so there aren't intermittent problems to waste our time with. We'll be using Django's built-in testing framework.
- Better debugging: Debugging in CakePHP amounts to defining a DEBUG level and seeing what is printed on the screen (usually the giant arrays). We supplemented this with Xdebug where we needed it, but that's still not enough. A framework should have excellent logging and on-the-fly debugging that displays a full traceback (often something will fail deep within CakePHP and we'll get the file/line where PHP gave up, but not the line in our code that started the problem), the values of variables, the page headers, server settings, the SQL that was run, what views and elements are in use, etc. We're planning on using a combination of pdb, IPython, and the django-debug-toolbar to make all of this easily accessible while developing.

Those are the major issues we're having right now. If you want to dig into the comparison some more, check out our discussion wiki pages, but realize the majority of the discussion happened in person.
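Before moving on, here's the sketch promised above: the cache-and-invalidate pattern using Django's ORM, memcache, and signals. The model and key names are hypothetical, not AMO's actual schema; treat it as the shape of the solution:

    # models.py inside a hypothetical Django app
    from django.core.cache import cache
    from django.db import models
    from django.db.models.signals import post_save
    from django.dispatch import receiver

    class Addon(models.Model):
        name = models.CharField(max_length=255)
        downloads = models.PositiveIntegerField(default=0)

    def get_addon(addon_id):
        key = 'addon:%d' % addon_id
        addon = cache.get(key)
        if addon is None:
            addon = Addon.objects.get(pk=addon_id)  # one object, not a giant nested array
            cache.set(key, addon, 60 * 5)
        return addon

    @receiver(post_save, sender=Addon)
    def invalidate_addon(sender, instance, **kwargs):
        # Caching is easy; expiration is hard. Invalidating on save keeps it honest.
        cache.delete('addon:%d' % instance.pk)

And Django's built-in test framework gives us the quick, independent, command-line tests we're after; a trivial, equally hypothetical example:

    from django.test import TestCase

    class AddonTests(TestCase):
        # Each test runs in its own transaction, so tests stay independent.
        def test_new_addons_start_with_zero_downloads(self):
            addon = Addon.objects.create(name='Test Add-on')
            self.assertEqual(addon.downloads, 0)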
Moving away from SVN

We moved AMO into SVN in 2006 and it's treated us relatively well. Somewhere along the line, we decided to tag our production versions at a revision of trunk instead of keeping a separate tag and merging changes into it. It's worked for us, but it's a hard cutoff on code changes, which means that while we're in a code freeze no one can check anything into trunk. As we begin to branch for larger projects this will become more of a hassle, so I'm planning on going back to a system where a production tag is created and changes are merged into it as they are ready to go live. Most of the development team has been using git-svn for several months and, aside from the commands being far more verbose, we haven't had many complaints. We've discovered Git is a much more powerful development tool, and we expect to use it directly starting some time next year. As of now, we expect to maintain the /locales/ directory in SVN, so this change doesn't affect localizers, but we'll keep people notified if there are any changes to that process.

Continuous Integration

I mentioned excellent testing being one of the reasons we're moving to Django. Along with that testing comes the opportunity for continuous integration. We plan on using Hudson as the framework for our continuous integration. With excellent test coverage and quick feedback from Hudson, this should drastically lower our regressions and boost our confidence when we deploy. Speaking of which...

Faster Deployment

For most of 2009 we've pushed on 3 week cycles: 2 weeks of development, 1 week of QA and L10n. Delays and regressions being what they are, I think we averaged a little better than a push a month. This is a fairly rapid cycle for a lot of development shops, but I feel like it's holding us back. We've heard a lot of success stories about shorter cycles and I'd like to aim for deployment (optionally, of course) of a few times per week. By shortening the development cycle we reduce the stress of:

- The developers: Everyone likes to see what they've done go out quicker, and it means fewer conflicts with others when the patches are smaller.
- The QA team: Right now we dump 2 weeks of work on them and say we need it done right away. With smaller cycles they can verify small changes as they go and not be overwhelmed.
- The infrastructure team: Smaller changes mean less to go wrong, and with a continuous integration server and some automation they can have minimal involvement with the whole process.
- The localizers: Every time we release, we dump a bunch of changes on these fantastic people and tell them we need them back in a week. Most of the time they plow forward and get them done on time. If they don't, though, they are stuck waiting for the next 3 week cycle. If we push often, it's not a big deal.
- The product managers: These guys come up with crazy ideas for us to implement and then they stare at graphs and numbers to see if it worked. With shorter cycles they can get faster feedback about what works and what doesn't.
- The users: Faster release cycles mean bugs that are fixed in the repository are fixed on the live site sooner. 'nuff said.

Process Data Offline

Much of AMO relies on cron jobs to get things done. All the statistics, add-on download numbers, how popular an add-on is, all the star rating calculations, any cleanup or maintenance tasks - these are all run via cron, and they are so intensive that the database has trouble keeping up. We're planning on utilizing Gearman to farm all this work out to other machines in incremental pieces instead of single huge queries. Any heavy calculating that can be done offline will be moved to these external processors, which should help improve the speed of the site and make all our statistics more reliable (as currently the cron jobs have a tendency to fail before they are complete).
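To illustrate the incremental-pieces idea, here's a rough sketch using the python-gearman client library. The task name and chunking are invented for the example, and it assumes a gearmand server on localhost; it's the shape of the approach, not our actual jobs:

    import gearman

    # Client side: instead of one enormous query, submit many small
    # background jobs that each touch a bounded slice of rows.
    client = gearman.GearmanClient(['localhost:4730'])

    def queue_download_counts(addon_ids, chunk_size=100):
        for i in range(0, len(addon_ids), chunk_size):
            chunk = addon_ids[i:i + chunk_size]
            client.submit_job('update_download_counts',
                              ','.join(str(pk) for pk in chunk),
                              background=True)  # fire and forget

    # Worker side, running on a separate machine:
    def update_download_counts(worker, job):
        ids = [int(pk) for pk in job.data.split(',')]
        # ... run the small, bounded query for just these ids ...
        return 'done'

    worker = gearman.GearmanWorker(['localhost:4730'])
    worker.register_task('update_download_counts', update_download_counts)
    worker.work()  # block and process jobs forever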
Improve the Documentation

Documentation is a noble goal of many developers, but it rarely gets enough attention. We evaluated our current documentation and found it woefully out of date. Because it lives on a wiki that is rarely used, it doesn't get updated except when someone tries to use it and sees it's not right. We're hoping to change that by moving the developer documentation into the code repository itself. We'll be able to integrate with generated API docs, style the docs however we want, and check in changes right along with our code patches. When someone checks out a copy of AMO, they'll get all the documentation right along with it. We'll use Sphinx to build the docs.

The outline above details several large, high-level changes, but there are a lot of other plans for smaller improvements as well. This post got a lot longer than I was expecting, but I'm really excited about the direction AMO is headed for 2010. As these changes are implemented, the site will become more responsive and reliable, and we'll be able to adapt to the needs of Mozilla's users even faster. As always, feedback and discussion are welcome, and stay tuned for further back-end improvements.

Add-on Localization Completeness Script is on AMO

The add-on verification suite launched a few months ago and has been refined with each subsequent milestone. We've changed what it searches for based on feedback and our own findings, and earlier this month we made it available to anyone on AMO, not just a hosted add-on's authors. The framework was written in an extensible way, so in addition to tweaking the built-in searches, we could also leverage external scripts. The first such script to make it to the live site is Adrian Kalla's localization completeness check. This script attempts to parse and record all the English string files as a baseline. Then it looks at each locale and reports any missing files, missing translations, or untranslated strings (translations that exist in the locale but are the same as the English). If you validate an extension now and only have partial L10n coverage, scroll down to the new L10n section of the results to see the report. Thanks to RJ and Adrian for doing all the work on this.
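For the curious, the core idea is simple to sketch. Here's a toy version that assumes plain key=value .properties files (a simplification; Adrian's real script understands the formats extensions actually ship):

    import os

    def parse_properties(path):
        # Parse a simple key=value .properties file into a dict.
        strings = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith('#') and '=' in line:
                    key, value = line.split('=', 1)
                    strings[key.strip()] = value.strip()
        return strings

    def check_locale(baseline, locale_dir):
        # Compare one locale against the en-US baseline:
        # baseline maps filename -> dict of English strings.
        report = {'missing_files': [], 'missing': [], 'untranslated': []}
        for filename, english in baseline.items():
            path = os.path.join(locale_dir, filename)
            if not os.path.exists(path):
                report['missing_files'].append(filename)
                continue
            translated = parse_properties(path)
            for key, value in english.items():
                if key not in translated:
                    report['missing'].append((filename, key))
                elif translated[key] == value:
                    # Identical to English: present, but probably not translated.
                    report['untranslated'].append((filename, key))
        return report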

Using substitution strings in .po files

A couple years ago I recommended using fake msgids in .po files and was, predictably, met with some argument. I suggested using this hack because there wasn't a standard way to store context in a .po file yet.
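These days gettext does have a standard answer: msgctxt. A tiny .po snippet (the strings are just an illustration) shows how it lets two identical msgids carry different translations:

    # "File" the verb and "File" the noun get separate entries.
    msgctxt "verb"
    msgid "File"
    msgstr "Archivar"

    msgctxt "noun"
    msgid "File"
    msgstr "Archivo"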

Automating "Thinking of you"

I had an idea a few weeks ago. I've got a bunch of great photos on my computer that no one ever sees unless we meet in person. Sure, we've got flickr and social networking sites, but I'm talking about an old photo that someone only saw once in passing, or a favorite shot from summer while you're huddled over your heater wondering when the sun is coming back. I can look at them any time, and there are a few on flickr that other people can look at, but what about people who aren't tech savvy or are really busy and get caught up in the daily grind?

I was thinking the answer could be a really simple script running from cron, say, weekly. It picks a random photo from a directory and emails it to a group of people. That's it. The idea is:

- It's low tech compared to RSS feeds and social networking sites. This "just works" with the tools people (potentially, low-tech people) are already used to using. In a similar vein, commenting and discussion are built in if you feel like it.
- It's focused on the people on the CC list. Sending out an old photo that is relevant just to those people has a lot more effect than one scrolling by on flickr. It's got a personal touch.
- It's automatic. You wake up Monday morning, drag yourself to work, and there is a photo from 15 years ago in your inbox and you can laugh about how bad your hair was. Awesome.

So, I thought I would try it and cooked up audreytoo. Basically, you seed a directory with copies of the pictures you want to send out, add the script to a cron job, and it does its thing. There are a couple other features in the script that you could look at if you want to get fancier. I realize there is some irony in using an automated script to say "I'm thinking of you", but as long as I'm on the CC list it's still true. :) Anyway, I'm sharing the code in case anyone else can find value in it.
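For flavor, here's a stripped-down sketch of that core loop in modern Python. The directory, addresses, and local SMTP server are placeholders, and audreytoo itself does more than this:

    import mimetypes
    import os
    import random
    import smtplib
    from email.message import EmailMessage

    PHOTO_DIR = os.path.expanduser('~/photos-to-share')
    RECIPIENTS = ['friend@example.com', 'family@example.com']

    def send_random_photo():
        photos = [f for f in os.listdir(PHOTO_DIR)
                  if f.lower().endswith(('.jpg', '.jpeg', '.png', '.gif'))]
        choice = os.path.join(PHOTO_DIR, random.choice(photos))

        msg = EmailMessage()
        msg['Subject'] = 'Thinking of you'
        msg['From'] = 'me@example.com'
        msg['To'] = ', '.join(RECIPIENTS)
        msg.set_content('A random photo from the archives. Enjoy!')

        ctype, _ = mimetypes.guess_type(choice)
        maintype, subtype = (ctype or 'application/octet-stream').split('/', 1)
        with open(choice, 'rb') as f:
            msg.add_attachment(f.read(), maintype=maintype, subtype=subtype,
                               filename=os.path.basename(choice))

        with smtplib.SMTP('localhost') as server:  # assumes a local mail server
            server.send_message(msg)

    if __name__ == '__main__':
        send_random_photo()  # point a weekly cron job at this script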