In many parts of the world a reliable internet connection is hard to come by. Ethernet is essentially non-existent and WiFi is king. As any digital nomad can testify, this is ‘good enough’ for doing productive work.
Unfortunately, not all WiFi connections work well all the time. They're fraught with unexpected problems, including dropping out entirely, abruptly killing connections, and running into connection limits.
Thankfully with a little knowledge it is possible to regain productivity that would otherwise be lost to a flaky internet connection. These techniques are applicable to coffee shops, hotels, and other places with semi-public WiFi.
One idea that’s been bouncing around in my head for the last few years has been a laptop with an E-Ink display. I would have thought this would be a niche that had been carved out already, but it doesn’t seem that any companies are interested in exploring it.
I use my laptop in some non-traditional environments, such as outdoors in direct sunlight. Almost all laptops are abysmal in a scenario like this. E-Ink screens are a natural response to this requirement. Unlike traditional TFT-LCD screens, E-Ink panels are meant to be viewed with an abundance of natural light. As a human, I too enjoy natural light.
As many of my friends know, I've been spending the last few weeks in New Zealand. Primarily it's been for a small holiday, but also in preparation for the inimitable Linux.conf.au 2015 conference. Leading up to that, one of the goals of my visit was to experience a variety of places across both islands.
As anybody who knows me can attest, I'm always interested in new tech, especially when it runs free software and is portable enough to join my every-day carry arsenal.
For the past month or so I've been looking at a few devices to carry with me as a secondary to my laptop. In a few weeks I'll be joining those already gathered for the third installment of Hackerbeach, on the Caribbean island of Dominica.
I wanted an embedded Linux system that could do everything. The scope of this device just kept getting bigger the more I thought about it.
A week ago I was fortunate enough to attend the latest code sprint of the Mercurial project. This was my second sprint with the project, and I took away quite a bit from the meeting. Around 20 people attended, working as one large group, with smaller groups splitting off intermittently to discuss particular topics. I had met a few of the attendees at a previous sprint.
Joining me at the sprint were two of my colleagues, Gregory Szorc (gps) and Mike Hommey (glandium). They took part in some of the serious discussions about core bugfixes and features that will help Mozilla scale its use of Mercurial. Impressively, glandium had been working on the project for mere weeks, but was already able to make serious contributions to the bundle2 format (an upcoming feature of Mercurial). Specifically, we talked to the Mercurial developers about some of the difficulties and bugs we've encountered with Mozilla's 'try' repository due to its tens of thousands of heads, and about the events that cause a serving request to spin forever.
By trade I'm a sysadmin/DevOps person, but I do have a coder hat that I don from time to time. Still, the sprint was full of serious coders who seemingly work on Mercurial full-time. Some attendees had big-name employers, some of whom would probably prefer that I didn't reveal their identities here.
Unfortunately, due to my lack of familiarity with many of the deep-down internals, I was unable to contribute to some of the discussions. It was primarily a learning experience for me, both about the process by which direction-driving decisions are made for the project (mpm's BDFL status) and about all of the considerations that go into choosing a particular method to implement an idea.
That’s not to say I was entirely useless. My knowledge of systems and package management meant I was able to collaborate with another developer (kiilerix) to improve the Docker package building support, including preliminary work for building (un)official Debian packages for the first time.
I also learned about some infrequently used features of Mercurial and picked up some tips. For example, folks who come from a git background often complain about Mercurial's lack of interactive rebase functionality. The "histedit" extension provides exactly that. Like many other Mercurial features, it is technically "in core" but not enabled by default; adding a line such as "histedit =" to the "[extensions]" section of your "hgrc" file enables it. It supports all the expected actions: picking, folding, dropping, and editing commits, as well as modifying commit messages.
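To make that concrete, here is a minimal sketch (the revision hash is hypothetical):

# Enable the bundled histedit extension:
$ cat >> ~/.hgrc <<'EOF'
[extensions]
histedit =
EOF
# Interactively rewrite history from an ancestor up to the working copy's
# revision; an editor opens offering pick/edit/fold/drop/mess actions:
$ hg histedit 6a9bc13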
Changeset evolution is another feature that has been in the works for a long time. It enables developers to safely modify history and propagate those changes to any down/upstream clones. It's still disabled by default, but is available as an extension. Gregory Szorc, a colleague of mine, has written about it before; if you're curious you can read more about it here.
One of the features I'm most looking forward to is sparse checkouts. Imagine, à la Perforce, being able to check out only a subtree or subtrees of a repository using '--include subdir1/' and '--exclude subdir2/' arguments during cloning/updating. This is what sparse checkouts will allow. Additionally, functionality is being planned to enable saved 'profiles' of subdirectories for different uses. For instance, specifying the '--enable-profile mobile' argument would apply a saved list of included and excluded items. This seems like a really powerful way of building lightweight checkout profiles for each different type of build we do. Unfortunately, a proper implementation is waiting on some other code to be developed first, such as sharded manifests.
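If it lands in the form discussed, usage might look something like the sketch below. The syntax is speculative, based purely on the arguments mentioned above, and the repository URL is made up:

# Clone only parts of a repository's tree (hypothetical syntax):
$ hg clone --include subdir1/ --exclude subdir2/ https://hg.example.org/repo
# Apply a saved profile of includes/excludes when updating:
$ hg update --enable-profile mobile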
One last thing I'd like to tell you about is an upcoming free software project for Mercurial hosting named Kallithea. It was born from the liberated code of the RhodeCode project. It is still in its infancy (version 0.1 as of the writing of this post), but it has some attractive features for viewing repositories, such as visualizations of changelog graphs, diffs, code reviews, a built-in editor, LDAP support, and even a JSON-RPC API for issue tracker integration.
In all, I feel it was a valuable experience that benefited both the Mercurial project and myself. I was able to lend some of my knowledge about building packages and my familiarity with operating large-scale hgweb serving, and I learned a lot about the internals of Mercurial, discovering that even the deep core code of the project isn't very scary.
I'm very thankful I was able to attend, and I look forward to the next sprint in the coming year.
As the primary maintainers of the back end of the try repository (in addition to the rest of our Mercurial infrastructure), we are responsible for its care and feeding: making sure it is available and that it remains a safe place to put your code before integration into the trees.
Recently (over the past few years, actually) we've found that Mercurial has problems scaling to try's level of activity. Here are some example statistics (a sketch of how to gather them follows the list):
24,550 Mercurial heads (the head count is reset every few months)
Head count correlates with the degraded performance
4.3 GB in size, containing 203,509 files (with no working copy checked out)
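For reference, numbers like those above can be gathered on the server itself. A quick sketch, using the repository path from our servers:

# Count heads, one node per line:
$ hg -R /repo/hg/mozilla/try heads --template '{node}\n' | wc -l
# Measure the size of the store alone (no working copy checked out):
$ du -hs --apparent-size /repo/hg/mozilla/try/.hg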
One of the methods we’re attempting is to modify try so that each push is not a head, but is instead a bundle that can be applied cleanly to any mozilla-central tree.
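Generating such a bundle is a one-liner. A sketch, assuming local clones named 'try' and 'mozilla-central' sitting side by side (the head hash is the one used later in this post):

# Bundle one try head, including only changesets absent from mozilla-central:
$ hg -R try bundle --rev fffe1fc3a4eea40b47b45480b5c683fea737b00f \
    fffe1fc3a4eea40b47b45480b5c683fea737b00f.bundle mozilla-central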
By default, when issuing an 'hg clone' from and to a local disk, Mercurial will create a hardlinked clone:
$ hg clone --time --debug --noupdate mozilla-central/ mozilla-central2/
linked 164701 files
listing keys for "bookmarks"
time: real 3.030 secs (user 1.080+0.000 sys 1.470+0.000)
$ du --apparent-size -hsxc mozilla-central*
These lightweight clones are the perfect environment to apply try heads to, because the heads will all be based off existing revisions in mozilla-central anyway. To do that, we can apply a try head bundle on top:
$ cd mozilla-central2/
$ hg unbundle $HOME/fffe1fc3a4eea40b47b45480b5c683fea737b00f.bundle
adding file changes
added 14 changesets with 55 changes to 52 files (+1 heads)
(run 'hg heads .' to see heads, 'hg merge' to merge)
$ hg heads|grep fffe1fc
This is great. Imagine if, instead of 'mozilla-central2', this clone were named fffe1*, could be stored in portable bundle format, and could be used to create repositories whenever they were needed. There is one problem though:
$ du -hsx --apparent-size mozilla-central*
Our lightweight 49MB copy has turned into 536MB. This would be fine for just a few repositories, but we have tens of thousands. That means we’ll need to keep them in bundle format and turn them into repositories on demand. Thankfully this operation only takes about three seconds.
I've written a little bash script to go through the backlog of 24,553 try heads and generate a bundle for each of them.
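A minimal sketch of the approach (illustrative rather than verbatim; it assumes local clones at ./try and ./mozilla-central):

#!/bin/bash
# For every head in try, write a standalone bundle containing only the
# changesets that mozilla-central doesn't already have.
mkdir -p bundles
for head in $(hg -R try heads --template '{node}\n'); do
    hg -R try bundle --rev "$head" "bundles/${head}.bundle" mozilla-central
done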
If this tooling works well, I'd like to start using it as the future method of submitting requests to try. Additionally, if developers wanted, I could create a Mercurial extension to automate the bundling process and create a bundle submission engine for try.
At Mozilla we use Mercurial for Firefox development. We have several repositories/trees that are used depending on where the code should be. If a developer wishes to test the code they have been developing, they can submit it to a Mercurial repository called 'try', since running our entire test suite is not feasible on developer machines.
We have quite a bit of infrastructure around this, including Tinderbox Pushlog (TBPL) and more. This post deals with that infrastructure and the problems we face while trying to scale the 'try' repository.
A few statistics:
The try repository currently has 17,943 heads. These heads are never removed.
The try repository is about 3.6 GB in size.
Due to Mercurial’s on-wire HTTP protocol, this number of heads causes HTTP cloning to fail
There are roughly 81,000 HTTP requests to try per day
To fix problems (mentioned below), the try repository is deleted and re-cloned from mozilla-central every few months
There are a number of problems associated with such a repository. One particularly nasty one has been present through several years of Mercurial development, and has been tricky in that it is seemingly unreproducible. The scenario is something like:
A user 'hg push'es some changes as a new head onto try
The push process takes a long time (sometimes between 10 minutes and hours)
A developer could issue an interrupt signal (Ctrl+C), which causes the client to gracefully hang up and exit (this typically has no effect on the server)
Subsequent pushes will hang with something similar to ‘remote: waiting for lock on repository /repo/hg/mozilla/try/ held by ‘hgssh1.dmz.scl3.mozilla.com:23974’
When this happens, an hg process running on the server has the following characteristics (a sketch of the inspection commands follows the list):
A 'hg serve' process runs single-threaded at 100% CPU
strace-ing and ltrace-ing reveal that the process is not making any system calls or external library calls
perf reveals that the process is spending all of its time inside some ambiguous Python function
pdb shows that the process is spending all of its time in a function that, somewhere in the stack trace, is performing ancestor calculations
The process will eventually exit cleanly
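The inspection itself looks something like this (the PID is hypothetical):

# Attach to the spinning process; no system calls are observed:
$ strace -p 28416
# Likewise, no external library calls:
$ ltrace -p 28416
# Sampling shows all time spent inside the Python interpreter:
$ perf top -p 28416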
As operators, there is nothing we can do to alleviate the situation once the repository gets into this state. We simply inform developers and monitor the situation.
There have been several ideas on ways to alleviate the problem:
Periodically reset 'try'. This is considered bad because 1) it loses history, and 2) it is disruptive to developers, who might have to re-submit their try jobs
Reset try on the SSH servers, but keep old try repositories on the HTTP servers. This has the potential to create unforeseen problems by growing these repositories even further on the HTTP servers. Resetting those as well (staggered from the SSH server resets) would remove that risk, but would still lose history.
Creating bundle files out of pushes to ‘try’, then hosting these in an accessible location (S3, http webroot, etc). I will detail this method in a future blog post.
As of now, though, try will periodically need to be reset as a countermeasure to the hangs mentioned in this post. Getting a reproducible test case might allow us to track down a bug or inefficiency in Mercurial and fix this problem after all. If you'd like to help us with this, please ping fubar or me (bkero) on irc.mozilla.org.
This post deals with the technical challenges we've encountered in establishing reliable communications while staying in rural Kenya. Some background information is necessary to understand the efforts we've gone through to remain connected.
This year the prestigious Hacker Beach event is taking place on the island of Lamu off the eastern coast of Kenya. The island is serviced by a single UMTS tower located above the hospital in the main town of Lamu City. However, our accommodation is on the other side of the island.
Our accommodation had a previously installed directional antenna on the roof to provide internet access. Unfortunately the access was very slow, with only 14% signal strength. This was complicated by strong winds blowing against the antenna, pushing it to point in the wrong direction. This further reduced the cellular reception, sometimes making it disconnect completely.
An additional problem was that WiFi was served by a single Cradlepoint MBR1000 router in a corner of the fort, making it inaccessible through the impenetrably thick fort walls. This meant we were limited to camping in the upstairs dining hall, which worked well enough due to all the seating, but there was some desire to branch out to work from other areas of the fort, such as the knights-of-the-round-table-esque meeting room.
For a group of 18 hackers, this level of connectivity was unacceptable. Many of us were making excursions into town to work at cafes with better reception. This was a problem because it threatened to undermine the spontaneous, collaborative nature of Hacker Beach. The way we saw it, there were two problems to fix:
Reception of the antenna was abysmal. Was this an inherent problem with the location?
WiFi reception was limited to only one corner of the house. Ideally the house should have WiFi everywhere.
We attempted to fix this by purchasing local SIM cards and installing them in portable WiFi hotspot devices. Oddly enough, we were able to receive some 3G reception if the devices were placed in rather random spots around the fort. Unfortunately the connectivity of these devices wasn't reliable enough for full-time hacking, so we began efforts in earnest to fix the connectivity problem.
We determined that the most appropriate solution to the WiFi problem was to employ PowerLine Ethernet adapters throughout the fort to distribute connectivity. Simply repeating the wireless signal was not a good option because of the lack of strategic locations to place wireless repeaters; the thick walls meant that the signal would be stopped between floors as well. We took a gamble and assumed that most outlets would be on the same power phase (if circuits are on different phases, PowerLine throughput will be severely limited, or the link will likely not work at all). Since more hackers were arriving in a few days, shipping was out of the question. Thankfully we were able to source some units in Athens, which (after some begging) our gracious friends were kind enough to pick up and bring for us.
The pairing part was easy, with the WiFi SSID/password being copied using WPS. After pairing, the devices could be moved anywhere in the fort to increase coverage. We installed two devices, which together are able to blanket the whole fort with connectivity. Problem solved.
Next was a trickier bit that required more calibration and special equipment. While inspecting the old antenna, we found that its connectors had been tortured by the elements for several years. The antenna pigtail connectors were rusted, which was likely causing reception issues. Another problem was that the pigtail was being run through a window, which was then closed on it. We feared this was crushing the cable, which could have easily rendered our antenna useless.
There were several more hackers arriving from Nairobi in a few days, so we asked them to bring some antenna gear to hopefully help improve our connectivity. In total, a questionably-EDGE amplifier, a directional antenna, and some cabling were delivered when the hackers arrived early yesterday morning. It didn't take long for us to tear it all open and start installing it.
Equipped with a laptop, an antenna, and a downstairs accomplice, we disconnected the old antenna and threw a new line down to connect to the 3G modem. Next, I opened the router's modem status page to measure signal strength while another hacker determined the direction the antenna should face to get the best reception. Our best direction was pressed against the old antenna; the people who installed the last one must have known what they were doing.
Unfortunately we were armed with only my multitool, which meant that proper mounting was going to be impossible. We tried wrenching the existing nuts that held the antenna in place, but they proved to be well stuck after a decade of rust and generally brutal African elements. Not even cooking oil (our improvised WD-40) would help loosen the offending nuts. Ultimately we ended up doing a bodge job to keep the antenna in place: one of the hackers had brought string with him, which we used to tie the new antenna's base plate to the old antenna. This worked surprisingly well, although it is a horrifyingly temporary solution; the string will not stand up to more than a few days outside here. Next we plan to source some tools locally and perform a permanent installation of the antenna.
With the old antenna the signal strength was consistently about 14%, which resulted in throughput of about 200 kbit/s. After our new antenna was installed and calibrated we were able to see signal strength of up to 80%, which gave us upwards of 1800 kbit/s of throughput with consistent pings of about 250 ms. Hooray!
After applying liberal traffic shaping on the router we are now able to comfortably surf the internet, download packages, and use IRC.
I’ve been as terrible about updating this as I expected to be. Nevertheless, I am struck with inspiration (or maybe it’s just energy from coffee), so another post must be written!
For the next week (and the previous week) I'm spending time in Paris. This turned out to be largely a convenient set of circumstances, since I had an excellent experience when I was here two weeks ago, and I wished I could spend more time here.