08 Dec 2014
One thing the regions Firefox OS targets have in common is a lack of dependable
bandwidth. Mobile data (if available) can be slow and expensive, wi-fi
connections are rare, and in-home internet is often completely absent. Without
regular or affordable connectivity, it’s easy for people to ignore device and
app updates and instead focus on downloading their content.
In the current model, Firefox OS checks daily for system and app updates
and downloads them when available. Once an update has been installed, the
download is deleted from device storage.
What if there were an alternative way to handle these numerous updates? Rather
than being deleted, the downloads would be saved on the device, and instead of
each Firefox OS device being required to download updates itself, the updates
could be shared with other Firefox OS devices. This Goodwill Update would make
it easier for people to get new features and important security fixes without
having to rely on a data connection of their own.
Goodwill Update could either run in the background (assuming there is disk space
and battery life to spare) or be more user-facing, presenting people with
notifications about available updates or even showing how much money they’ve
saved by avoiding bandwidth charges. Perhaps it could even offer to buy
Bob a beer!
Would this be worth doing to help emerging markets stay up to date?
PS. Hat tip to Katie and Tiffanie for the image and idea help.
18 Nov 2014
When we run ALTER statements on our big tables we have to plan ahead to keep
from breaking whatever service is using the database. In MySQL, a simple change
to a column (say, from a short varchar to a text field) can often* read-lock
the entire table for however long the change takes. If a service is using the
table when you begin the query you'll start eating into your downtime budget.
If your site is large enough to have database slaves, you get a double whammy:
all reads block on the master while it alters the table, and then, by default,
the change replicates out to your slaves. Not only will they read-lock the
table while they alter it, they will also pause any further replication until
the change is done, potentially adding many more hours of outdated data being
returned to your service while replication catches up.
The good news is, in some situations, we can take advantage of having database
slaves to keep the site at 100% uptime while we make time consuming changes to
the table structure. The notes below assume a single master with multiple
independent slaves (meaning, the slaves aren't replicating to each other).
First, it should go without saying, but the client application needs to
gracefully handle both the existing table structure and the new one.
When you're ready to begin, pull a slave out of rotation and run your ALTER
statement on it. When it completes, put the slave back into the cluster and let
it catch up on replication. Repeat those steps for each slave. Then fail over
to one of the slaves as the new master, pull the old master out of rotation,
and run the ALTER statement on it. Once it has finished, put it back in the
cluster as a slave. When its replication catches up, you can promote it back to
master and switch the temporary master back to a slave.
At this point you should have the modified table structure everywhere and be
back to your original cluster configuration.
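The sequence above can be sketched as a script. Everything here is a
placeholder: the hostnames, the ALTER statement, and the run helper (which
only prints each step so the plan can be reviewed) would all be swapped for
your real load-balancer and mysql commands.

```shell
#!/bin/sh
# Dry-run sketch of the rolling ALTER described above. run() just
# records each step; a real version would execute it instead.
ALTER_SQL='ALTER TABLE users MODIFY COLUMN bio TEXT'
MASTER=db-master
SLAVES='db-slave1 db-slave2'

run() { STEPS="$STEPS$*
"; }

for slave in $SLAVES; do
  run "pull $slave out of rotation"
  run "mysql -h $slave -e \"$ALTER_SQL\""
  run "return $slave to the cluster; wait for replication to catch up"
done

run "fail over to db-slave1 as temporary master; pull $MASTER out of rotation"
run "mysql -h $MASTER -e \"$ALTER_SQL\""
run "rejoin $MASTER as a slave; once caught up, promote it back to master"

printf '%s' "$STEPS"
```

Printing the plan first is cheap insurance: the whole point of the exercise is
that running a step out of order read-locks a table your service depends on.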
Special thanks to Sheeri who explained how to do all
the above and saved us from temporarily incapacitating our service.
*Which changes will lock a table varies depending on the version of MySQL. Look
for "Allows concurrent DML?" in the table on this manual page.
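Relatedly, newer MySQL versions (5.6+) let you request an online alter
explicitly; if the change can't be done without blocking, the statement fails
immediately instead of silently read-locking the table. The table and column
names here are made up for illustration:

```sql
-- Fails fast if this change would require copying/locking the table,
-- rather than read-locking it for the duration:
ALTER TABLE users MODIFY COLUMN bio TEXT, ALGORITHM=INPLACE, LOCK=NONE;
```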
31 Oct 2014
Jared, Stuart, and Andy recently spent some time focusing on one
of the Marketplace's biggest hurdles for new contributors: how do I get all
these moving pieces set up and talking to each other?
I haven't written a patch for the Marketplace in a while so I decided to see
what all the fuss was about. First up I, of course, read the installation
documentation. Ok, I skimmed it, but it looks pretty straightforward.
Step 1: Install Docker
I'm running Ubuntu so that's as easy as:
sudo apt-get install docker.io
To fix permissions (put your own username instead of clouserw):
sudo usermod -a -G docker clouserw
Step 2: Build Dependencies
The steps below had lots of output which I'll skip pasting here, but there were
no errors and it only took a few minutes to run.
$ git clone https://github.com/mozilla/wharfie
$ cd wharfie
$ bin/mkt whoami clouserw
$ bin/mkt checkout
$ mkvirtualenv wharfie
$ pip install --upgrade pip
$ pip install -r requirements.txt
$ sudo sh -c "echo 127.0.0.1 mp.dev >> /etc/hosts"
Step 3: Build and run the Docker containers
I ran this seemingly innocent command:
$ fig build
And 9 minutes and a zillion pages of output later I saw a promising message
saying it had successfully built. One more command:
$ fig up
and I loaded http://mp.dev/ in my browser:
A weird empty home page, but it's a running version of the Marketplace on my
local computer! Success! Although, I'm not sure it counts unless the unit
tests pass. Let's see...
$ CTRL-C # To shutdown the running fig instance
$ fig run --rm zamboni python ./manage.py test --noinput -s --logging-clear-handlers
Ran 4223 tests in 955.328s
FAILED (SKIP=26, errors=34, failures=17)
Hmm...that doesn't seem good. Apparently there is some work left to get the
tests to pass. I'll file bug 1082183 and keep moving. I know Travis-CI
will automatically run all the tests on any pull request so any changes I make
will still be tested -- depending on the changes you make this might be enough.
Step 4: Let's write some code
If I were new to the Marketplace I'd look at the Contributing docs and
follow the links there to find a bug to work on. However, I know Bug 989121 -
Upgrade django-csp has been assigned to me for six months so I'm going to
work on that.
I'll avoid talking about the patch since I'm trying to focus on the how and
not the what in this post. The code is all in the /trees/ subdirectory under
wharfie, so I'll go there to write my code. A summary of the commands:
$ cd trees/zamboni
$ git checkout -b 989121 # I name my branches after my bug numbers
$ vi <files> # Be sure to include unit tests
$ git add <files>
$ git commit # commit messages must include the bug number
$ git push origin 989121
Now my changes are on GitHub! When I load the repository I committed to in my
browser I see a big green button at the top asking if I want to make a pull
request. I click the button and submit my pull request, which notifies the
Marketplace developers that I'd like to merge the changes in. It will also
trigger the unit tests and notify me via email if any of them fail. Assuming
everything passes then I'm all done.
This flow is still a little rough around the edges, but for an adventurous
contributor it's certainly possible. It looks like Bug 1011198 - Get a
turn-key marketplace dev-env up and running is tracking progress on making
this easier so if you're interested in contributing feel free to follow along
and jump in when you're comfortable.
27 Oct 2014
This post is a celebration of finishing a migration of this site off of
WordPress and onto flat files, built by Jekyll from Markdown files. I'm
definitely looking forward to writing more Markdown and fewer HTML tags.
jekyll-import did 90% of the work, roughly pulling my WordPress data into
Markdown files, and then I spent a late night with Vim macros and sed
massaging it the way I wanted it.
If all I wanted to do was have my posts up, I'd be done, but having the option
to accept comments is important to me and I wasn't comfortable using a platform
like Disqus because I didn't want to force people to use a 3rd party.
Since my posts only average one or two comments I ended up using a slightly
modified jekyll-static-comments to simply put a form on the page and email
me any comments (effectively, a moderation queue). If it's not spam, it's easy
to create a .comment file and it will appear on the site.
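Approving a comment looks something like the file below. The exact field names
depend on how jekyll-static-comments is configured, so treat these as
illustrative rather than the plugin's actual schema:

```yaml
# _comments/2014-10-27-goodbye-wordpress-1.comment
# Field names are a guess; check the plugin's README for the real ones.
post_id: /2014/10/27/goodbye-wordpress
name: Jane Doe
email: jane@example.com
date: 2014-10-28
comment: |
  Congrats on the migration!
```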
My original goal was to host this all on GitHub Pages, but they only allow a
short list of plugins and the commenting system isn't on it, so I'll stick
with my own host for now.
Please let me know if you see anything broken.
24 Sep 2014
The AMO team is meeting this week to discuss road maps and strategies, and
among the topics is our backlog of open bugs. Since mid-2011 we have averaged
around 1200 open bugs at any one time.
Currently any interaction with AMO's bugs is too time consuming: finding
good first bugs, triaging existing bugs, organizing a chunk of bugs to fix in a
milestone -- they all require interacting with a list of 1200 bugs, many of
which are years old and full of discussions by people who no longer contribute
to the bugs. The small chunks of time I (and others) get to work on AMO are
consumed by digging through these old bugs and trying to apply them to the
current state of the site.
In an effort to get this list to a manageable size the AMO team is aggressively
triaging and closing bugs this week, hopefully ending the week with a realistic
list of items we can hope to accomplish. With that list in hand we can
prioritize the bugs, divide them into milestones, and begin to lobby for the
resources to address them.
Many of the bugs being closed are good ideas and we'd like to fix them, but we
simply need to be realistic about what we can actually do with the resources we
have. If you'd like to contribute patches to any of the closed bugs, please
feel free to reopen them.
Thanks for being a part of AMO.