
AMO Development Changes in 2010

The AMO team met in Mountain View last week to develop a 2010 plan. We've been wanting to change some key areas of our development flow for a while, but we needed to make sure time was budgeted in the overall AMO and Mozilla goals. As usual, the timeline will be tight, but the AMO developers do amazing work, and as our changes are implemented, development should just get faster. I'll give a brief summary of the changes we're planning; a lot of discussion went into this and I'm not going to be able to cover everything here. If you've been in the AMO calls or reading the notes you probably already know most of this.

Migrating from CakePHP to Django

This is a big undertaking and we've been discussing it for quite a while. We're currently the highest-trafficked site on the internet using CakePHP, and along with that we've run into a lot of frustrating issues. CakePHP has served AMO well for several years, so it's not my intention to bad-mouth it here, but I do want to give a fair summary of why we're moving on. Please also note that AMO is still running on CakePHP 1.1 which is, I think, a year out of date? Three substantial issues:

Useful database abstraction layer: CakePHP has a concept of database abstraction, but we didn't find it powerful enough. When it did work, it would return enormous nested arrays of data, causing massive CPU and memory usage (out-of-memory errors plague us on AMO). When it didn't work, we'd end up writing queries directly, which kind of defeats the purpose. We couldn't use prepared statements, so we'd have to escape variables ourselves. There was no effective caching built in, and since we just had huge arrays as a response there was no effective way to invalidate the cache we were using (see: caching is easy; expiration is hard). The DB layer should return objects that are easy to cache and easy to invalidate. The built-in Django database classes (combined with memcache) should work fine for us here.
Effective unit tests: I've beaten the drum about our unit tests before, but the simple matter is that it's really difficult to do them right with the tools we're using. Our test data is already very limited, but if we try to run all our tests right now they'll run out of memory (and take forever). The CakePHP method of mocking controllers and models was inadequate for what we needed and difficult to deal with. We want our unit tests to run quickly, from the command line, and be independent from each other so there aren't intermittent problems to waste our time. We'll be using Django's built-in testing framework.

Better debugging: Debugging in CakePHP amounts to defining a DEBUG level and seeing what is printed on the screen (usually the giant arrays). We supplemented this with Xdebug where we needed it, but that's still not enough. A framework should have excellent logging and on-the-fly debugging that displays a full traceback (often something will fail deep within CakePHP and we'll get the file/line where PHP gave up, but not the line in our code that started the problem), the values of variables, the page headers, server settings, the SQL that was run, which views and elements are in use, etc. We're planning on using a combination of pdb, IPython, and the django-debug-toolbar to make all of this easily accessible while developing.

Those are the major issues we're having right now. If you want to dig into the comparison some more, check out our discussion wiki pages, but realize the majority of the discussion happened in person.

Moving away from SVN

We moved AMO into SVN in 2006 and it's treated us relatively well. Somewhere along the line, we decided to tag our production versions at a revision of trunk instead of keeping a separate tag and merging changes into it. It's worked for us, but it's a hard cutoff on code changes, which means that while we're in a code freeze no one can check anything in to trunk.
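Circling back to the testing point for a moment: Django's test framework builds on Python's standard unittest module, so the kind of fast, independent, command-line test we're after looks roughly like this (bayesian_rating is a hypothetical helper for illustration, not AMO code):

```python
import unittest

def bayesian_rating(votes, avg, site_votes=5, site_avg=3.0):
    """Hypothetical helper: a weighted star rating that pulls
    add-ons with few votes toward the site-wide average."""
    return (site_votes * site_avg + votes * avg) / (site_votes + votes)

class RatingTests(unittest.TestCase):
    # Each test stands alone: no shared state and no required
    # ordering, so the suite runs quickly and deterministically.
    def test_no_votes_falls_back_to_site_average(self):
        self.assertEqual(bayesian_rating(0, 0.0), 3.0)

    def test_many_votes_approach_the_raw_average(self):
        self.assertAlmostEqual(bayesian_rating(10000, 4.5), 4.5, places=2)
```

Run from the command line with `python -m unittest`; Django's `manage.py test` wraps the same machinery and adds database fixtures on top.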
As we begin to branch for larger projects this will become more of a hassle, so I'm planning on going back to a system where a production tag is created and changes are merged into it as they are ready to go live. Most of the development team has been using git-svn for several months and, aside from the commands being far more verbose, we haven't had many complaints. We've found Git to be a much more powerful development tool and we expect to use it directly starting some time next year. For now, we expect to keep the /locales/ directory in SVN so this change doesn't affect localizers, but we'll keep people notified if there are any changes to that process.

Continuous Integration

I mentioned excellent testing as one of the reasons we're moving to Django. Along with that testing comes the opportunity for continuous integration. We plan on using Hudson as the framework for our continuous integration. With excellent test coverage and quick feedback from Hudson, this should drastically lower our regressions and boost our confidence when we deploy. Speaking of which...

Faster Deployment

For most of 2009 we've pushed on 3-week cycles: 2 weeks of development, 1 week of QA and L10n. Delays and regressions being what they are, I think we averaged a little better than a push a month. This is a fairly rapid cycle for a lot of development shops, but I feel like it's holding us back. We've heard a lot of success stories about shorter cycles and I'd like to aim for deploying (optionally, of course) a few times per week. By shortening the development cycle we reduce the stress of:

- The developers: Everyone likes to see what they've done go out quicker, and smaller patches mean fewer conflicts with others.
- The QA team: Right now we dump 2 weeks of work on them and say we need it done right away. With smaller cycles they can verify small changes as they go and not be overwhelmed.
- The infrastructure team: Smaller changes mean less to go wrong, and with a continuous integration server and some automation they can have minimal involvement with the whole process.
- The localizers: Every time we release we dump a bunch of changes on these fantastic people and tell them we need them back in a week. Most of the time they plow forward and get them done on time. If they don't, though, they're stuck waiting for the next 3-week cycle. If we push often, it's not a big deal.
- The product managers: These guys come up with crazy ideas for us to implement and then stare at graphs and numbers to see if it worked. With shorter cycles they can get faster feedback about what works and what doesn't.
- The users: Faster release cycles mean bugs that are fixed in the repository are fixed on the live site sooner. 'nuff said.

Process Data Offline

Much of AMO relies on cron jobs to get things done. All the statistics, add-on download numbers, how popular an add-on is, all the star rating calculations, any cleanup or maintenance tasks - these are all run via cron, and they are so intensive that the database has trouble keeping up. We're planning on using Gearman to farm all this work out to other machines in incremental pieces instead of single huge queries. Any heavy calculating that can be done offline will be moved to these external processors, which should help improve the speed of the site and make all our statistics more reliable (currently the cron jobs have a tendency to fail before they're complete).

Improve the Documentation

Documentation is a noble goal of many developers but it rarely gets enough attention. We evaluated our current documentation and found it woefully out of date. Because it lives on a wiki that is rarely used, it only gets updated when someone tries to use it and sees it's wrong. We're hoping to change that by moving the developer documentation into the code repository itself.
We'll be able to integrate with generated API docs, style the docs however we want, and check in changes right along with our code patches. When someone checks out a copy of AMO, they'll get all the documentation with it. We'll use Sphinx to build the docs.

The outline above details several large, high-level changes, but there are a lot of other plans for smaller improvements as well. This post got a lot longer than I was expecting, but I'm really excited about the direction AMO is headed in 2010. As these changes are implemented the site will become more responsive and reliable, and we'll be able to adapt to the needs of Mozilla's users even faster. As always, feedback and discussion are welcome, and stay tuned for further back-end improvements.

Add-on Localization Completeness Script is on AMO

The add-on verification suite launched a few months ago and has been refined with each subsequent milestone. We've changed what it searches for based on feedback and our own findings, and earlier this month we made it available to anyone on AMO, not just a hosted add-on's authors. The framework was written in an extensible way so that, in addition to tweaking the built-in searches, we could also leverage external scripts.

The first such script making it to the live site is Adrian Kalla's localization completeness check. This script attempts to parse and record all the English string files as a baseline. Then it looks at each locale and reports any missing files, missing translations, or untranslated strings (translations that exist in the locale but are the same as the English). If you validate an extension now and only have partial L10n coverage, scroll down to the new L10n section to see the results. Thanks to RJ and Adrian for doing all the work on this.
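The core comparison the check performs can be sketched in a few lines (a hypothetical simplification, not Adrian's actual code; a real checker would first build these mappings by parsing each locale's string files):

```python
def l10n_report(english, locale):
    """Compare one locale's strings against the en-US baseline.

    Both arguments map string IDs to their text. Returns the IDs
    missing from the locale and the IDs whose "translation" is
    identical to the English (i.e. probably untranslated).
    """
    missing = sorted(key for key in english if key not in locale)
    untranslated = sorted(key for key in english
                          if key in locale and locale[key] == english[key])
    return {"missing": missing, "untranslated": untranslated}
```

The same-as-English heuristic has false positives (some strings legitimately don't change between locales), which is why the report flags them for a human rather than failing validation outright.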

Using substitution strings in .po files

A couple of years ago I recommended using fake msgids in .po files and was, predictably, met with some argument. I suggested that hack because there wasn't yet a standard way to store context in a .po file.
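The standard that eventually arrived is msgctxt, added in gettext 0.15: the same msgid can carry different translations per context without faking the msgid itself. A sketch of what it looks like in a .po file (the file paths and French strings here are made up for illustration):

```po
#: templates/addons/listing.html:12
msgctxt "add-on listing column"
msgid "Name"
msgstr "Nom"

#: templates/users/edit.html:30
msgctxt "user profile form"
msgid "Name"
msgstr "Nom complet"
```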

Automating "Thinking of you"

I had an idea a few weeks ago. I've got a bunch of great photos on my computer that no one ever sees unless we meet in person. Sure, we've got flickr and social networking sites, but I'm talking about an old photo that someone only saw once in passing, or a favorite shot from summer while you're huddled over your heater wondering when the sun is coming back. I can look at them any time, and there are a few on flickr that other people can look at, but what about people who aren't tech savvy, or are really busy and get caught up in the daily grind?

I was thinking the answer could be a really simple script running from cron, say, weekly. It picks a random photo from a directory and emails it to a group of people. That's it. The idea is:

- It's low tech compared to RSS feeds and social networking sites. This "just works" with the tools people (potentially low-tech people) are already used to using. In a similar vein, commenting and discussion are built in if you feel like it.
- It's focused on the people on the CC list. Sending out an old photo that is relevant just to those people has a lot more effect than one scrolling by on flickr. It's got a personal touch.
- It's automatic. You wake up Monday morning, drag yourself to work, and there is a photo from 15 years ago in your inbox and you can laugh about how bad your hair was. Awesome.

So, I thought I would try it and cooked up audreytoo. Basically, you seed a directory with copies of the pictures you want to send out, add the script to a cron job, and it does its thing. There are a couple of other features in the script that you could look at if you want to get fancier. I realize there is some irony in using an automated script to say "I'm thinking of you," but as long as I'm on the CC list it's still true. :) Anyway, I'm sharing the code in case anyone else can find value in it.
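In spirit, the whole thing is about this much code (a modern-Python sketch, not the actual audreytoo script; the paths and addresses are placeholders you'd change):

```python
import os
import random
import smtplib
from email.message import EmailMessage

# Placeholders: seed PHOTO_DIR with copies of the photos you want to
# send, set the addresses, and run this from a weekly cron job.
PHOTO_DIR = "/home/me/photos-to-send"
SENDER = "me@example.com"
RECIPIENTS = ["friends@example.com"]

def pick_photo(directory):
    """Return the path of one random image from the seeded directory."""
    photos = [name for name in os.listdir(directory)
              if name.lower().endswith((".jpg", ".jpeg", ".png"))]
    if not photos:
        return None
    return os.path.join(directory, random.choice(photos))

def send_photo(path):
    """Mail the photo as an attachment via the local SMTP server."""
    msg = EmailMessage()
    msg["Subject"] = "Thinking of you: " + os.path.basename(path)
    msg["From"] = SENDER
    msg["To"] = ", ".join(RECIPIENTS)
    with open(path, "rb") as fh:
        msg.add_attachment(fh.read(), maintype="image", subtype="jpeg",
                           filename=os.path.basename(path))
    with smtplib.SMTP("localhost") as server:
        server.send_message(msg)

if __name__ == "__main__":
    photo = pick_photo(PHOTO_DIR)
    if photo:
        send_photo(photo)
```

A crontab line like `0 8 * * mon python send_photo.py` finishes the job: one photo, every Monday morning, no accounts or feeds required.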

Top 50 searches on addons.mozilla.org

The flight from Portland to San Jose is just about the right length to write some scripts to analyze a bunch of data, make a pretty graph, and then write a blog post drawing fairly obvious conclusions. Someone on IRC said they were interested in the top search terms being used on addons.mozilla.org, so here we are.

Between April 29, 2009 and May 5, 2009 there were around 150,000 queries. Of the top 20 queries on addons.mozilla.org (a quick estimate says that's around 12% of the total queries on the site), only 7 actually have search terms. The rest are just choosing different options for the search, like a category or the number of results on a page. Filtering the top queries down to ones that include search terms (all for the en-US locale unless otherwise noted), it looks like the majority of searches are for specific add-ons, but there are also some popular generic terms like download, gmail, and video. I think it's interesting that German was the only other locale to make the list (and fairly high up on it). Maybe the next stats post will be about overall locale use.
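The analysis itself is about as simple as it sounds; a sketch of the term-counting step (assuming the raw q parameters have already been pulled out of the query logs):

```python
from collections import Counter

def top_search_terms(queries, n=10):
    """Tally the most common search terms from raw query strings.

    Rows that only tweak search options (category, results per
    page) arrive with no actual term and are skipped.
    """
    counts = Counter(q.strip().lower() for q in queries if q.strip())
    return counts.most_common(n)
```

Normalizing case and whitespace before counting matters more than you'd expect: "Adblock" and "adblock " are the same intent, and leaving them separate would scatter the counts across near-duplicate rows.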