1. DNSSEC-trigger on Arch Linux without Network Manager

    Many ISPs have pretty shitty DNS resolvers: sometimes they forge responses, sometimes they are just poorly configured. And it's even worse on hotspots with captive portals.

    Part of the solution is to use DNSSEC… but it's not that easy. Of course, Unbound can be installed in a matter of minutes, but it's not necessarily a good idea to always use a full resolver instead of the ISP's caching resolvers, and it can break on hotel networks, as it can't resolve the (bogus) hostname of the captive portal.

    A nice solution is to use DNSSEC-trigger:

    Dnssec-trigger reconfigures the local unbound DNS server. This unbound DNS server performs DNSSEC validation, but dnssec-trigger will signal it to use the DHCP-obtained forwarders if possible, fall back to doing its own AUTH queries if that fails, and if that fails too, prompt the user via dnssec-trigger-applet with the option to go with insecure DNS only.

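    Dnssec-trigger normally learns about the DHCP-provided resolvers through its NetworkManager hook, so without NetworkManager you have to submit them yourself with dnssec-trigger-control. Here is a minimal sketch of what I mean; the package and service names and the dhcpcd hook are assumptions, adapt them to your setup:

    # pacman -S unbound dnssec-trigger        # package names assumed
    # systemctl enable unbound dnssec-triggerd
    # systemctl start unbound dnssec-triggerd
    # echo "nohook resolv.conf" >> /etc/dhcpcd.conf   # let dnssec-trigger manage resolv.conf
    # cat > /etc/dhcpcd.exit-hook <<'EOF'
    # Hand the DHCP-provided resolvers to dnssec-trigger instead of resolv.conf
    case "$reason" in
        BOUND|REBIND|REBOOT)
            dnssec-trigger-control submit "$new_domain_name_servers"
            ;;
    esac
    EOF

    Dnssec-trigger should then take care of pointing /etc/resolv.conf at the local Unbound and of probing the forwarders you submitted.
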
    Tagged as : dns dnssec unbound linux
  2. Resource control with systemd

    My dev environment requires CouchDB, ElasticSearch, Redis, a Django dev server, and a web browser. Usually I even need two browsers: Firefox and Chromium. And, of course, I run Emacs and a few instances of urxvt.

    This takes a lot of resources. And I obviously don't want my laptop to lag because it's swapping or because any service is hogging the CPU.

    Since the biggest offenders are ElasticSearch and CouchDB, it's really tempting to use systemd to limit how much memory they can use. And it turns out that it's extremely easy to do so:

    # cat > /etc/systemd/system/elasticsearch.service <<EOF
    .include /usr/lib/systemd/system/elasticsearch.service
    
    [Service]
    CPUShares=512
    MemoryLimit=1G
    EOF
    # systemctl daemon-reload
    # systemctl restart elasticsearch
    

    Done! And it's easy enough to do the same with CouchDB or any other memory-eating service. But if I give each of these services 1G of memory, they can still collectively use too much memory. And if I give them less, some of them will quickly run out of memory. But it turns out that this is really easy to solve using systemd again: you just have to put all the services in the same CGroup, and limit the resources for the whole CGroup.

    Using recent versions of systemd, this is done using slices. (With older versions you could deal with CGroups directly; this is not possible anymore, or is at least strongly discouraged, as newer versions use a new, different CGroups hierarchy.) The principle is almost the same: we create a new slice, assign services to this slice, and limit the resources for this slice. Again this can be done in a few seconds:

    # cat > /etc/systemd/system/limits.slice <<EOF
    [Unit]
    Description=Limited resources Slice
    DefaultDependencies=no
    Before=slices.target
    
    [Slice]
    CPUShares=512
    MemoryLimit=2G
    EOF
    # for SERVICE in couchdb elasticsearch redis; do cat > /etc/systemd/system/$SERVICE.service <<EOF
    .include /usr/lib/systemd/system/$SERVICE.service
    
    [Service]
    Slice=limits.slice
    EOF
    done
    # systemctl daemon-reload
    # systemctl restart couchdb elasticsearch redis
    

    And you can then check that it works using systemctl status:

    # systemctl status elasticsearch.service
    elasticsearch.service - ElasticSearch
       Loaded: loaded (/etc/systemd/system/elasticsearch.service; disabled)
       Active: active (running) since Wed 2013-12-18 11:08:07 CET; 35min ago
     Main PID: 28784 (java)
       CGroup: /limits.slice/elasticsearch.service
               └─28784 /bin/java -Xms256m -Xmx1g -Xss256k -Djava.awt.headless=true ...
    

    More documentation is available in man systemd.resource-control and man systemd.slice. Since slices are quite new, there are not many examples of their use yet, but I'm sure that will change in the future.

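    If you want to double-check which limits actually apply and where each service ended up, systemctl show and systemd-cgtop come in handy (nothing specific to slices here, just regular systemd tooling):

    # systemctl show -p CPUShares -p MemoryLimit limits.slice
    # systemctl show -p Slice elasticsearch.service
    # systemd-cgtop
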
    Tagged as : systemd cgroups linux
  3. New blog engine… again

    Another year (or two), another blog engine :) I've now switched to Pelican, a static blog engine written in Python. So far so good: it works well, is fast, and there's a great community working on a lot of nice plugins and themes (including this one).

    Now let's hope I'll write a little more :)

    Tagged as : blog pelican
  4. Optimizing JPEG pictures

    I recently realized that during our vacation in London, my girlfriend and I took about 4 GB of pictures. Since I currently have 30 GB of storage space on rsync.net to do my backups, 4 GB is quite a lot. Fortunately, there are several solutions to reduce their size.

    The first one would be to resize them or to increase their compression ratio / decrease their quality. But I don't want such a lossy method: I want to keep my pictures at the best quality available so I can print them in high resolution if I want to.

    The other solution is to "optimize" them. Once again, there are several methods: removing unnecessary data (EXIF markers and other metadata), converting to progressive JPEG, or optimizing the Huffman tables. Since I don't want to lose metadata (mostly because I add many tags to my pictures in Shotwell, and they are stored in that metadata), I only use the last two methods.

    Most of my photos are taken with my camera (Panasonic Lumix FZ100) or with my girlfriend's (Nikon Coolpix S8000).

    I first tried jpegoptim for this task. It only optimizes Huffman tables, and it does that well. However, this tool only supports EXIF and IPTC metadata, while on pictures taken with my camera, Shotwell stores its tags in the XMP "Subject" marker. And jpegoptim erases XMP markers when processing files, resulting in many lost tags...

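    By the way, an easy way to see where a given picture stores its tags is to dump its metadata with exiv2 (just a quick check on the side, not part of the optimization itself; the filename is a placeholder):

    exiv2 -pe some-picture.JPG    # EXIF data
    exiv2 -pi some-picture.JPG    # IPTC data (e.g. Iptc.Application2.Keywords)
    exiv2 -px some-picture.JPG    # XMP properties (e.g. Xmp.dc.subject)
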
    So I tried to use jpegtran to do the same. It also supports progressive JPEG, and is apparently much better at not destroying metadata when not asked to do so :) Here is the command I use to optimize my pictures with it:

    parallel -u 'echo {}; jpegtran -optimize -progressive -perfect -copy all -outfile {}.tran {} && mv {}.tran {}' ::: *.JPG
    

    parallel is GNU Parallel, a tool which is very useful to speed things up by using the 16 cores of my work PC to do the job :)

    Using jpegtran this way, I reduced the size of my "London" folder from 4.0 GB to 3.5 GB, i.e. a 12.5% reduction with absolutely no quality loss. Not bad!

    Now, some funny things I noticed while doing this:

    • the Lumix FZ100 does not do any optimization on its JPEG files: jpegtran always reduced them by at least 13%, sometimes more. It also creates some EXIF and XMP markers in its files, but no IPTC tags.

    • the Coolpix S8000 does a much better job at optimizing its files: jpegtran could only reduce their size by 0.6 to 0.8%, 1.2% at best. It creates EXIF, XMP and IPTC markers.

    • when Shotwell stores tags directly in pictures, it will use the IPTC "Keywords" marker only if there is already IPTC data in the file. This is why jpegoptim lost tags on pictures taken with my camera: the FZ100 only added XMP markers, which were then wiped out by jpegoptim. For pictures taken with the S8000, tags were stored in both XMP and IPTC markers, so when the XMP ones were removed, Shotwell still took the IPTC version into account.

      Not sure if it's a bug or a feature...

    Tagged as : jpeg shotwell howto
  5. iPhone tracking

    There has recently been a lot of noise about a tool made by Alasdair Allan and Pete Warden showing that every iPhone is tracking its owner's movements all the time. For the record, the existence of this database on every iPhone running iOS 4.x has been documented for several months already. And it's not really surprising... Remember Eben Moglen's talk at FOSDEM 2011?

    Now, time for a confession: I have an iPhone too. It's a nice, mostly useless device, but it becomes quite fun to use once you jailbreak it. And since I jailbroke mine, I can have fun with it. Now, let's have fun with this geolocation database.

    Accessing the geolocation database...

    First, ssh into your jailbroken iPhone as root (or mount it with ifuse: ifuse --root /path/to/mountpoint). The DB is stored in the /var/root/Library/Caches/locationd folder and is named consolidated.db, just as explained on the iPhone Tracker page. On my phone, it's a 5.4 MB file. You can copy it to your computer (using scp, rsync, or just cp if you're using ifuse).

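    For example, over SSH (the hostname is whatever your phone answers to, of course):

    scp root@my-iphone:/var/root/Library/Caches/locationd/consolidated.db .
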
    If you're curious, you can then investigate the content of this file using sqlite3 or a GUI such as sqliteman. Here are a few interesting tables: celllocation, celllocationlocal, and wifilocation.

    The first one is the one used by Alasdair Allan and Pete Warden in their "iPhone Tracker" tool. On my phone, there are 2,624 records in this table (timestamp, latitude, longitude, altitude, plus some other columns); the oldest ones are 2.5 months old (February 5th -- FOSDEM!). It would seem that these records indicate the positions of cell towers rather than your own, but this can only be guessed, since you can't have a look at the iOS source code...

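    For example, a couple of quick sqlite3 queries (the column names come from the description above, and the 978307200 offset assumes the timestamps use Apple's "seconds since 2001-01-01" epoch, which is what they look like):

    sqlite3 consolidated.db "SELECT COUNT(*) FROM celllocation;"
    sqlite3 consolidated.db "SELECT datetime(timestamp + 978307200, 'unixepoch'),
                             latitude, longitude, altitude
                             FROM celllocation ORDER BY timestamp LIMIT 5;"
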
    The second table has a similar structure, but apparently a different content. I did not investigate further (yet).

    The last one is a little different: wifilocation. It stores the position of a lot of MAC addresses (with, of course, an associated timestamp). I don't know if these are the MAC addresses of some wireless access points or the MAC addresses of wireless clients, but given that on my phone there are 35,770 records since February 6th, I doubt these are just access points.

    ...for fun and profit

    The iPhone Tracker seems to be a very nice program, but it's for Macs only. So I hacked a little Python script that can read such a database and produce a KML file that can then be viewed in Google Earth.

    The script is available here: iphone-tracker.py. No dependencies except for Python 3. Very simple to use:

    ./iphone-tracker.py path/to/consolidated.db > output.kml
    

    The result can then be opened in Google Earth. The positions are grouped by day to avoid having 2500+ points overlapping on a map.

    As the researchers who first found out about this described, the stored positions are far from precise. The recorded timestamps are very approximate too. But the simple fact that so much data is stored about one's location is really concerning.

    Several months ago, someone also made a web viewer (in French) where you can upload your database file and see the result in Google Maps.

    What now?

    As far as I know, Apple has not made a public statement about this little controversy yet. But I'm really eager to see what they will say about it -- if they care to say anything at all.

    I'm also deeply concerned about the wifilocation table of this database, which is, in some respects, much worse than the celllocation table (there is no need for your phone to store cell locations: your network operator already has that data, and it's probably far easier for your government to ask them than to get access to your phone).

    If it contains geolocation data of wireless access points, this could cause problems similar to what Google encountered in Germany, when Google Cars were gathering data about wireless networks in addition to the Google Street View pictures.

    But if the wifilocation table actually contains the last seen location of wireless clients, it could mean that your phone can be used to prove that you were close to a specific person (identified by their phone's wireless MAC address) at a specific moment. And, for some people, in some countries, this is a serious reason to worry.

    If you wish to disable this database on your (jailbroken) iPhone, you may use this workaround.

    Tagged as : iphone gps python3 apple
