Planet Collabora

May 25, 2016

Olivier Crête

GStreamer Spring Hackfest 2016

After missing the last few GStreamer hackfests I finally managed to attend this time. It was held in Thessaloniki, Greece’s second-largest city. The city is located by the seaside, and the entire hackfest and related activities were either directly by the sea or just a couple of blocks away.

Collabora was very well represented, with Nicolas, Mathieu and Lubosz also attending.

Nicolas concentrated his efforts on making kmssink and v4l2dec work together to provide zero-copy decoding and display on an Exynos 4 board without a compositor or other form of display manager. Expect a blog post soon explaining how to make this all fit together.

Lubosz showed off his VR kit. He implemented a viewer for planar point clouds acquired from a Kinect. He’s working on a set of GStreamer plugins to play back spherical videos. He’s also promised to blog about all this soon!

Mathieu started the hackfest by investigating the intricacies of Albanian customs, then arrived in Thessaloniki on the second day and hacked on hotdoc, his new fancy documentation generation tool. He’ll also be posting a blog about it; in the meantime you can read more about it here.

As for myself, I took the opportunity to fix a couple of GStreamer bugs that really annoyed me. First, I looked into bug #766422: why glvideomixer and compositor didn’t work with RTSP sources. Then I tried to add a ->set_caps() virtual function to GstAggregator, but it turned out I first needed to delay all serialized events to the output thread to get predictable outcomes, and that was trickier than expected. Finally, I got distracted by a bee and decided to start porting the contents of the old website to Markdown, updating them to the GStreamer 1.0 API so we can finally retire it.

I’d also like to thank Sebastian and Vivia for organising the hackfest and for making us all feel welcome!

GStreamer Hackfest Venue

by ocrete at May 25, 2016 08:43 PM

May 17, 2016

Gustavo Padovan

Collabora contributions to Linux Kernel 4.6

Linux Kernel 4.6 was released this week, and a total of 9 Collabora engineers took part in its development, the highest number of Collabora engineers to contribute to a single Linux Kernel release yet. In total, Collabora contributed 42 patches.

As part of Collabora’s continued commitment to further increase its participation in the Linux Kernel, Collabora is actively looking to expand its team of core software engineers. If you’d like to learn more, follow this link.

Here are some highlights of Collabora’s participation in Kernel 4.6:

Andrew Shadura fixed the number of buttons reported on the PenMount 6000 USB touchscreen controller, while Daniel Stone enabled BCM283x family devices in the ARM multi_v7_defconfig and Emilio López added module autoloading for a few sunxi devices.

Enric Balletbo i Serra added boot console output to AM335X (Sitara) and OMAP3-IGEP, and fixed audio codec setup on AM335X by using the right external clock. Martyn Welch added the USB device ID for the GE Healthcare cp210x serial device and renamed the reset reason of the Zodiac Watchdog.

Gustavo Padovan cleaned up the Android Sync Framework on the staging tree for further de-staging of the Sync File infrastructure, which will land in 4.7. Most of the work was removing interfaces that won’t be used in mainline. He also added vblank event support for atomic commits in the virtio DRM driver.

Peter Senna improved an error path and added some style fixes to the sisusbvga driver, while Sjoerd Simons enabled wireless on Radxa Rock2 boards, fixed an issue with the brcmfmac sdio driver sometimes timing out with a false positive, and fixed some issues with serial output on the Renesas R-Car Porter board.

Tomeu Vizoso changed driver_match_device() to return errors and, in the case of -EPROBE_DEFER, to queue the device for deferred probing. He also provided two fixes to the Rockchip DRM driver as part of his work on making intel-gpu-tools work on other platforms.

Following is a list of all patches submitted by Collabora for this kernel release:

Andrew Shadura (1):

Daniel Stone (1):

Emilio López (4):

Enric Balletbo i Serra (3):

Gustavo Padovan (17):

Martyn Welch (2):

Peter Senna Tschudin (4):

Sjoerd Simons (6):

Tomeu Vizoso (4):

by Gustavo Padovan at May 17, 2016 03:39 PM

April 21, 2016

Tomeu Vizoso

Validating changes to KMS drivers with IGT

New DRM drivers are being added in almost every new kernel release, and because the mode setting API is so rich and complex, bugs do slip in that translate to differences in behaviour between drivers.

There have been previous attempts at writing test suites for validating changes and preventing regressions, but they have typically happened downstream, focused on the specific needs of specific products, and were limited to one or at most a few different hardware platforms.

Writing these tests from scratch would have been an enormous amount of work, and gathering previous efforts and joining them wouldn't have been worth much, because they were written using different test frameworks and in different programming languages. Also, there would be great overlap on the basic tests, and little would remain of the trickier stuff.

Of the existing test suites, the one with the most coverage is intel-gpu-tools, used by the Intel graphics team. Though a big part of it is specific to the i915 driver, the parts that use the generic APIs are pretty much driver-independent and can be made to work with the other drivers without much effort. Also, Broadcom's Eric Anholt has already started adding tests for IOCTLs specific to the VideoCore-IV driver.

Collabora's Micah Fedke and Daniel Stone had added a facility for selecting DRM device files other than i915's and I improved the abstraction for creating buffers so it works for drivers without GEM buffers. Next I removed a bunch of superfluous dependencies on i915-only stuff and got a useful subset of tests to run on a Radxa Rock2 board (with the Rockchip 3288 SoC). Around half of these patches have been merged already and the other half are awaiting review. Meanwhile, Collabora's Robert Foss is running the ported tests on a Raspberry Pi 2 and has started sending patches to account for its peculiarities.

The next two big chunks of work are abstracting CRC checksums of frames (on drivers other than i915 this could be done with Google's Chamelium or with a board similar to the Numato Opsis), and the buffer management API from libdrm that is currently i915-only (bufmgr). Something that will have to be dealt with in the future is abstracting the submission of specific loads on the GPU, as that's currently very much driver-specific.

Additionally, I will be scheduling jobs in our LAVA instance to run these tests on the boards we have in there.

Thanks to Google for sponsoring my time, to the Intel OTC folks for their support and reviews, and to Collabora for sponsoring Robert's, Micah's and Daniel's time.

by Tomeu Vizoso at April 21, 2016 01:02 PM

April 16, 2016

Tollef Fog Heen

Blog moved, new tech

I moved my blog around a bit and it appears that static pages are now in favour, so I switched to that, by way of Hugo. The CSS and such need more tweaking, but it’ll do for now.

As part of this, the RSS feeds and such have changed; if you want to subscribe to this (very seldom updated) blog, use

April 16, 2016 08:42 PM

March 26, 2016 - Robert Foss

Coverpage template


Coverpage is a single-page landing page built to showcase an idea or a product. To allow interested parties to get notified of updates, the template has MailChimp subscription integration.

A live version of the site can be found at


git clone

GitHub hosting

This template was built with the explicit intention of having it be hosted at GitHub in a gh-pages branch. It therefore includes a Makefile for pushing a copy of the current design to a gh-pages branch.

by Robert Foss at March 26, 2016 11:34 PM

March 16, 2016

Gustavo Padovan

Collabora contributions to Linux Kernel 4.5

Linux Kernel 4.5 was released earlier this week, and once again Collabora engineers played a role in its development. In addition to their current projects, seven Collabora engineers contributed a total of 33 patches to the new Kernel.

As part of its continued commitment to further increase its participation in the Linux Kernel, Collabora is looking to expand its team of core software engineers. If you’d like to learn more, follow this link.

Here are some highlights of Collabora’s participation in Kernel 4.5:

Daniel Stone improved i915 runtime WARN() messages and fixed an important issue in the component subsystem when component_add() fails. Danilo Cesar made the DRM Docbook ready for Markdown text.

Gustavo Padovan improved the pm_runtime management in the drm/exynos driver and started work on de-staging the Android Sync Framework. On Rockchip, Sjoerd Simons enabled the IR receiver on the RK3288 Radxa Rock 2 Square, enabled Rockchip audio in multi_v7_defconfig, and allowed the RK3288 SPDIF clocks to change their parent. On the net side, Sjoerd added a patch to turn the carrier off on phy attach to avoid unknown states, and another patch to add an ethernet0 alias for the RK3288 to help u-boot find this device node.

During his brief time with us at Collabora, Heiko Stübner added the dts file for the veyron-brain board, a shutdown callback to platform-variant dwc2 devices for special clock handling to avoid getting stuck in the reboot/poweroff process, and multi_v7_defconfig support for Rockchip’s io-domain driver, crypto module and rk808 clkout module. He also enabled support for the veyron minnie touchscreen, adjusted temperature limits on veyron-speedy, and fixed the edp-24m clock to be associated with the internal 24MHz oscillator all the time.

Martyn Welch added a driver for the Zodiac Aerospace RAVE Watchdog Processor, while Tomeu Vizoso added a device_is_bound() helper function and a setter for dev.pm_domain that comes with extra checks. Tomeu also added a patch to allow USB devices to remain runtime-suspended when sleeping, and another patch to optimize sleep by going direct_complete if a driver has no prepare and PM callbacks. Lastly, Tomeu fixed a frequency issue in a Tegra callback.

Following is a list of all patches submitted by Collabora for this kernel release:

Daniel Stone (3):

Danilo Cesar Lemes de Paula (1):

Gustavo Padovan (9):

Heiko Stübner (8):

Martyn Welch (2):

Sjoerd Simons (5):

Tomeu Vizoso (5):

by Gustavo Padovan at March 16, 2016 06:17 PM

March 01, 2016

Pekka Paalanen

Wayland has been accepted as a Google Summer of Code organization

Now is high time to start discussing what you might want to do, for both student candidates and possible mentors.

Students, have a look at our project idea examples to get a feeling for what kind of projects you could propose. First you will need to contribute at least a small but significant patch to show that you understand the workflow; we have put together some ideas for first tasks.

Our application instructions for students are also available. Of course, all the pages are reachable from the Wayland GSoC wiki page and also the Wayland organization page.

If you want to become a mentor, please contact me or Kat, the contact details are on the Wayland GSoC wiki page.

Note that students can also apply under the X.Org Foundation organization, since Wayland is within their scope too and they also have other excellent graphics project ideas. You are welcome to submit your Wayland proposals to both organizations.

by pq at March 01, 2016 11:58 AM

February 26, 2016

Andrew Shadura

Phulud? No, Phulad.

If you bought a North India travel guide by Vanessa Betts and Victoria McCulloch, and tried to figure out where ‘Phulud’ is and how to get there from Deogarh (and how to get to Deogarh itself from Udaipur), don’t waste your time googling: it’s not Phulud, but Phulad.

It does seem that the narrow gauge journey from Deogarh to Phulad is indeed beautiful:

Meanwhile, I have also found this very interesting post by Mary Anne Erickson: Impressions of India: Udaipur to Deogarh. I’m not yet sure we’re going to follow that route, but it seems promising.

P.S. Despite what the guide said about the airport in Jaisalmer being due to open in 2013, according to the reports it is still not open, so we have to skip that city. Oh well.

EDIT: The guide actually also says, on page 10: “Fly from Jaisalmer back to Delhi to connect with your flight home.” Fact checking? No, who needs that? :)

February 26, 2016 04:45 PM

February 16, 2016

Pekka Paalanen

A programmer's view on digital images: the essentials

How is an uncompressed raster image laid out in computer memory? How is a pixel represented? What are stride and pitch and what do you need them for? How do you address a pixel in memory? How do you describe an image in memory?

I tried to find a web page for dummies explaining all that, and all I could find was this. So, I decided to write it down myself with the things I see as essential.

An image and a pixel

Wikipedia explains the concept of raster graphics, so let us take that idea as a given. An image, or more precisely, an uncompressed raster image, consists of a rectangular grid of pixels. An image has a width and height measured in pixels, and the total number of pixels in an image is obviously width×height.

A pixel can be addressed with coordinates x,y after you have decided where the origin is and which way the coordinate axes go.

A pixel has a property called color, and it may or may not have opacity (or occupancy). Color is usually described as three numerical values, let us call them "red", "green", and "blue", or R, G, and B. If opacity (or occupancy) exists, it is usually called "alpha" or A. What R, G, B, and A actually mean is irrelevant when looking at how they are stored in memory. The relevant thing is that each of them is encoded with a certain number of bits. Each of R, G, B, and A is called a channel.

When describing how much memory a pixel takes, one can use units of bits or bytes per pixel. Both can be abbreviated as "bpp", so be careful about which one is meant, and favour more explicit names in code. Bits per channel is also used sometimes, and the channels of a pixel can each have a different number of bits. For example, the rgb565 format is 16 bits per pixel, 2 bytes per pixel, with 5 bits for each of the R and B channels and 6 bits for the G channel.
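
To make that concrete, here is a minimal C sketch of unpacking an rgb565 pixel into its channels; the sample value is arbitrary, and the pixel is assumed to already be in host byte order:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t pixel = 0xf81f;          /* r = 31, g = 0, b = 31: magenta */
    uint8_t r = (pixel >> 11) & 0x1f; /* top 5 bits */
    uint8_t g = (pixel >> 5) & 0x3f;  /* middle 6 bits */
    uint8_t b = pixel & 0x1f;         /* bottom 5 bits */
    printf("r=%u g=%u b=%u\n", r, g, b);
    return 0;
}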

A pixel in memory

Pixels do not come in arbitrary sizes. A pixel is usually 32 or 16 bits, or 8 or even 1 bit. 32 and 16 bit quantities are easy and efficient to process on 32 and 64 bit CPUs. Your usual RGB-image with 8 bits per channel is most likely in memory with 32 bit pixels, the extra 8 bits per pixel are simply unused (often marked with X in pixel format names). True 24 bits per pixel formats are rarely used in memory because trading some memory for simpler and more efficient code or circuitry is almost always a net win in image processing. The term "depth" is often used to describe how many significant bits a pixel uses, to distinguish from how many bits or bytes it occupies in memory. The usual RGB-image therefore has 32 bits per pixel and a depth of 24 bits.

How channels are packed in a pixel is specified by the pixel format. There are dozens of pixel formats. When decoding a pixel format, you first have to understand if it is referring to an array of bytes (particularly used when each channel is 8 bits) or bits in a unit. A 32 bits per pixel format has a unit of 32 bits, that is uint32_t in C parlance, for instance.

The difference between an array of bytes and bits in a unit is the CPU architecture endianness. If you have two pixel formats, one written in array-of-bytes form and one written in bits-in-a-unit form, and they are equivalent on a big-endian architecture, then they will not be equivalent on a little-endian architecture, and vice versa. This is important to remember when you are mapping one set of pixel formats to another, between OpenGL and anything else, for instance. Figure 1 shows three different pixel format definitions that produce identical binary data in memory.

Figure 1. Three equivalent pixel formats with 8 bits for each channel. The writing convention here is to list channels from highest to lowest bits in a unit. That is, abgr8888 has r in bits 0-7, g in bits 8-15, etc.
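
One way to convince yourself of this is to store an argb8888 pixel as a 32-bit unit and then inspect its bytes; a small, self-contained C sketch:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* argb8888 in bits-in-a-unit form: a=0x11, r=0x22, g=0x33, b=0x44. */
    uint32_t pixel = 0x11223344;
    const uint8_t *bytes = (const uint8_t *) &pixel;

    printf("%02x %02x %02x %02x\n",
           bytes[0], bytes[1], bytes[2], bytes[3]);
    /* Little-endian prints "44 33 22 11" (b, g, r, a in array-of-bytes
     * form); big-endian prints "11 22 33 44" (a, r, g, b). */
    return 0;
}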

It is also possible, though extremely rare, that architecture endianness also affects the order of bits in a byte. Pixman, undoubtedly inheriting it from the X11 pixel format definitions, is the only place where I have seen that.

An image in memory

The usual way to store an image in memory is to store its pixels one by one, row by row. The origin of the coordinates is chosen to be the top-left corner, so that the leftmost pixel of the topmost row has coordinates 0,0. First there are all the pixels of the first row, then the second row, and so on, including the last row. A two-dimensional image has been laid out as a one-dimensional array of pixels in memory. This is shown in Figure 2.

Figure 2. The usual layout of pixels of an image in memory.

There are not just width×height pixels in memory: each row may also include some padding. The padding area is not used for storing anything; it only aligns the length of the row. Having padding requires a new concept: image stride.

Padding is often necessary for hardware reasons. The more specialized and efficient the hardware for pixel manipulation, the more likely it is to have specific requirements on row start and length alignment. For example, Pixman, and therefore also Cairo (the image backend particularly), require that rows are aligned to 4-byte boundaries. This makes it easier to write efficient image manipulations using vectorized or other instructions that may even process multiple pixels at the same time.
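
As a sketch of what such a requirement means in code, here is one way to compute a stride satisfying a 4-byte row alignment; the function name is mine, not from any particular library:

#include <stdint.h>

/* Bytes from the start of one row to the start of the next,
 * rounded up to a multiple of 4. */
static inline uint32_t
stride_for_width(uint32_t width, uint32_t bytes_per_pixel)
{
    return (width * bytes_per_pixel + 3) & ~(uint32_t) 3;
}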

Stride or pitch

Image width is practically always measured in pixels. Stride on the other hand is related to memory addresses and therefore it is often given in bytes. Pitch is another name for the same concept as stride, but can be in different units.

You may have heard rules of thumb that stride is in bytes and pitch is in pixels, or vice versa. Stride and pitch are used interchangeably, so be sure of the conventions used in the code base you might be working on. Do not trust your instinct on bytes vs. pixels here.

Addressing a pixel

How do you compute the memory address of a given pixel x,y? The canonical formula is:
pixel_address = data_begin + y * stride_bytes + x * bytes_per_pixel.
The formula starts with the address of the first pixel in memory, data_begin, then skips to row y, each row being stride_bytes long, and finally skips to pixel x on that row.

In C code, if we have 32 bit pixels, we can write
uint32_t *p = data_begin;
p += y * stride_bytes / sizeof(uint32_t);
p += x;
Notice how the type of p affects the computations, counting in units of uint32_t instead of bytes.

Let us assume the pixel format in this example is argb8888 which is defined in bits of a unit form, and we want to extract the R value:
uint32_t v = *p;
uint8_t r = (v >> 16) & 0xff;
Finally, Figure 3 gives a cheat sheet.

Figure 3. How to compute the address of a pixel.

Now we have covered the essentials, and you can stop reading. The rest is just good to know.

Not everyone has the "right" way up

In the above we have assumed that the image origin is the top-left corner, and that rows are stored top-most first. The most notable exception to this is the OpenGL API, which defines image data to be stored bottom-most row first. (Traditionally the BMP file format also does this.)
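
In such a layout, the addressing formula from above only needs its row index flipped; keeping the same names, and with y still counted from the top:
pixel_address = data_begin + (height - 1 - y) * stride_bytes + x * bytes_per_pixel.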

Multi-planar formats

In the above, we have talked about single-planar formats. That means that there is only a single two-dimensional array of pixels forming an image. Multi-planar formats use two or more two-dimensional arrays for forming an image.

A simple example with an RGB-image would be to store R channel in the first plane (2D-array) and GB channels in the second plane. Pixels on the first plane have only R value, while pixels on the second plane have G and B values. However, this example is not used in practice.

Common and real use cases for multi-planar images are the various YUV color formats. For instance, the Y channel is stored on the first plane and the UV channels on the second plane. A benefit of this is that the UV plane can be sub-sampled: its resolution could be only half that of the Y plane, saving some memory.
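
As a concrete sketch, consider the common NV12 layout: a full-resolution Y plane plus a UV plane subsampled 2×2 with U and V interleaved. Addressing samples could look like this in C (the helper names and parameters are mine):

#include <stdint.h>

/* Luma: one byte per pixel on the first plane. */
static inline uint8_t
y_sample(const uint8_t *y_plane, uint32_t y_stride, uint32_t x, uint32_t y)
{
    return y_plane[y * y_stride + x];
}

/* Chroma: one interleaved U,V pair per 2×2 block of luma samples. */
static inline void
uv_sample(const uint8_t *uv_plane, uint32_t uv_stride,
          uint32_t x, uint32_t y, uint8_t *u, uint8_t *v)
{
    const uint8_t *pair = uv_plane + (y / 2) * uv_stride + (x / 2) * 2;

    *u = pair[0];
    *v = pair[1];
}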

Tiled formats

If you have read about GPUs, you may have heard of tiling or tiled formats (tiled renderer is a different thing). These are special pixel layouts, where an image is not stored row by row but a rectangular block by block. Tiled formats are far too wild and various to explain here, but if you want a taste, take a look at Nouveau's documentation on G80 surface formats.

by pq at February 16, 2016 02:23 PM

February 07, 2016 - Robert Foss

ESP8266 APA102 Bulb


The product of this project is a WiFi-connected LED bulb. Every LED on the bulb is individually programmable over WiFi, simply by sending UDP packets to the bulb.

Software and hardware sources

git clone

This project consists of three parts: the software running on the LED bulb, the software running on a host computer, and the hardware.


Firmware

The firmware is based on the NodeMCU firmware for the ESP8266. It's running the APA102 LED driver and the enduser setup module, which I've written about previously.

Additionally, it's running three Lua scripts, each dealing with a different aspect.

There's init.lua, which makes sure we're connected to WiFi.

udp_listener.lua receives UDP packets and forwards that data to the APA102 strips.

And lastly, udp_broadcast.lua periodically broadcasts a heartbeat for this LED bulb, to signal that it is alive and well.

Host application

The current (as of the publish date of this post) incarnation of the host application listens for bulbs that are alive on the host's network. If a bulb is found, it will be added to the list of bulbs to be animated. All animations are simple and sinusoidal, and only use the time a bulb has been 'alive' as an input for the animation.
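
For illustration only, here is a hedged C sketch of what sending one such UDP packet from a host might look like; the port number, the bulb's address and the raw R,G,B payload layout are assumptions on my part, as the actual protocol is whatever udp_listener.lua implements:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);                        /* assumed port */
    inet_pton(AF_INET, "192.168.0.42", &addr.sin_addr); /* assumed bulb IP */

    /* Assumed payload: one R,G,B triple per LED; one LED here.
     * Error handling omitted for brevity. */
    uint8_t payload[3] = { 255, 0, 64 };
    sendto(fd, payload, sizeof(payload), 0,
           (struct sockaddr *) &addr, sizeof(addr));
    close(fd);
    return 0;
}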


Hardware

The hardware is based around the ESP8266 WiFi IC and the APA102 SPI LED IC.

The flavor of ESP8266 used in this project is the ESP-12F module, since it is the latest module available with the integrated-antenna form factor.

The APA102 was chosen instead of the much more common WS2812B chip since it uses an SPI-like protocol, which isn't timing-sensitive and also does not require an external capacitor next to each LED.

v3.1 Schematic


v2 3D Model


Assembled v2 hardware


by Robert Foss at February 07, 2016 09:46 PM

February 06, 2016

Andrew Shadura

Community time at Collabora

I haven’t yet blogged about this (as normally I don’t blog often), but I joined Collabora in June last year. Since then, I had an opportunity to work with OpenEmbedded again, write a kernel patch, learn lots of things about systemd (in particular, how to stop worrying about it taking over the world and so on), and do lots of other things.

As one would expect when working for a free software consultancy, our customers do understand the value of the community and of contributing back to it, and so does the customer of the project I’m working on. In fact, our customer insists we keep the number of locally applied patches to, for example, the Linux kernel, to a minimum, submitting as much as possible upstream.

However, apart from the upstreaming work which may be done for the customer, Collabora encourages us, the engineers, to spend up to two hours weekly on upstreaming on top of what customers need, and up to five days yearly as paid Community days. These community days may be spent working on code, volunteering at free software events, or even speaking at conferences.

Even though on this project I have already been paid for contributing to the free software project which I maintained in my free time previously (ifupdown), paid community time is a great opportunity to contribute to the projects I’m interested in, and if the projects I’m interested in coincide with the projects I’m working with, I effectively can spend even more time on them.

A bit unfortunately for me, I hadn’t spent enough time last year planning my community days, so I used most of them in the last weeks of the calendar year, and I used them (and some of my upstreaming hours) on something that benefitted both the free software community and Collabora. I’m talking about SparkleShare, a cross-platform Git-based file synchronisation solution written in C#. SparkleShare provides an easy-to-use interface for Git, or, actually, it makes it possible to not use any Git interface at all, as it monitors the working directory using inotify and commits stuff right after it changes. It automatically handles conflicts, even for binary files, though I have to admit its handling could still be improved.

At Collabora, we use SparkleShare to store all sorts of internal documents, and it’s also used by users not familiar with command-line interfaces. Unfortunately, the version we recently had in Debian had a couple of very annoying bugs, making it a great pain to use: it would not notice edits in local files, or would not notice new commits being pushed to the server, and that sometimes led to individual users’ edits being lost. Not cool, especially when the document has to be sent to the customer in a couple of minutes.

The new versions, 1.4 and the recently released 1.5, were reported to be much better and to also fix some crashes, but they used GTK+ 3 and some libraries not yet packaged for Debian. Thanh Tung Nguyen packaged these (and a newer SparkleShare) for Ubuntu and published them in his PPA, but they required some work to be fit for Debian.

I have never touched Mono packages before in my life, so I had to learn a lot. Some time was spent talking to upstream about fixing their copyright statements (they had none in the code, and only one author was mentioned in one file and nowhere else in the source), and a bit more time went into adjusting and updating the patches to the current source code version. Then, of course, waiting for the packages to go through NEW. Fixing parallel build issues, waiting for buildds to build all the dependencies for at least one architecture… But then, finally, on the 19th of January I had the updated SparkleShare in Debian.

As you may have already guessed, this blog post has been sponsored by Collabora, the first of my employers to encourage (nay, require) me to work on free software in my paid time :)

February 06, 2016 05:22 PM

February 01, 2016

Philip Withnall

DX hackfest 2016 aftermath

The DX hackfest, and FOSDEM, are over. Thanks everyone for coming — and thanks to betacowork, ICAB, the GNOME Foundation, and the various companies who allowed people to come along. Thanks to Collabora for sending me along and sponsoring snacks and dinner one evening.

What did we do?

by Philip Withnall at February 01, 2016 03:42 PM

January 30, 2016

Simon McVittie

GNOME Developer Experience hackfest: xdg-app + Debian

Over the last few days I've been at the GNOME Developer Experience hackfest in Brussels, looking into xdg-app and how best to use it in Debian and Debian derivatives.

xdg-app is basically a way to run "non-core" software on Linux distributions, analogous to apps on Android and iOS. It doesn't replace distributions like Debian or packaging systems, but it adds a layer above them. It's mostly aimed towards third-party apps obtained from somewhere that isn't your distribution vendor, aiming to address a few long-standing problems in that space:

  • There's no single ABI that can be called "a standard Linux system" in the same way there would be for Windows or OS X or Android or whatever, apart from LSB which is rather limited. Testing that a third-party app "works on Linux", or even "works on stable Linux releases from 2015", involves a combinatorial explosion of different distributions, desktop environments and local configurations. Steam uses the Steam Runtime, a chroot environment closely resembling Ubuntu 12.04 LTS; other vendors tend to test on a vaguely recent Ubuntu LTS and leave it at that.

  • There's no widely-supported mechanism for installing third-party applications as an ordinary user. One vendor used to distribute Ubuntu- and Debian-compatible .deb files, but installing a .deb involves running arbitrary vendor-supplied scripts as root, which should worry anyone who wants any sort of privilege separation. (They have now switched to executable self-extracting installers, which involve running arbitrary vendor-supplied scripts as an ordinary user... better, but not perfect.)

  • Relatedly, the third-party application itself runs with the user's full privileges: a malicious or security-buggy third-party application can do more or less anything, unless you either switch to a different uid to run third-party apps, or use a carefully-written, app-specific AppArmor profile or equivalent.

To address the first point, each application uses a specified "runtime", which is available as /usr inside its sandbox. This can be used to run application bundles with multiple, potentially incompatible sets of dependencies within the same desktop environment. A runtime can be updated within its branch - for instance, if an application uses the "GNOME 3.18" runtime (consisting of a basic Linux system, the GNOME 3.18 libraries, other related libraries like Mesa, and their recursive dependencies like libjpeg), it can expect to see minor-version updates from GNOME 3.18.x (including any security updates that might be necessary for the bundled libraries), but not a jump to GNOME 3.20.

To address the second issue, the plan is for application bundles to be available as a single file, containing metadata (such as the runtime to use), the app itself, and any dependencies that are not available in the runtime (which the app vendor is responsible for updating if necessary). However, the primary way to distribute and upgrade runtimes and applications is to package them as OSTree repositories, which provide a git-like content-addressed filesystem, with efficient updates using binary deltas. The resulting files are hard-linked into place.

To address the last point, application bundles run partially isolated from the wider system, using containerization techniques such as namespaces to prevent direct access to system resources. Resources from outside the sandbox can be accessed via "portal" services, which are responsible for access control; for example, the Documents portal (the only one, so far) displays an "Open" dialog outside the sandbox, then allows the application to access only the selected file.

xdg-app for Debian

One thing I've been doing at this hackfest is improving the existing Debian/Ubuntu packaging for xdg-app (and its dependencies ostree and libgsystem), aiming to get it into a state where I can upload it to Debian experimental. Because xdg-app aims to be a general freedesktop project, I'm currently intending to make it part of the "Utopia" packaging team alongside projects like D-Bus and polkit, but I'm open to suggestions if people want to co-maintain it elsewhere.

In the process of updating xdg-app, I sent various patches to Alex, mostly fixing build and test issues, which are in the new 0.4.8 release.

I'd appreciate co-maintainers and further testing for this stuff, particularly ostree: ostree is primarily a whole-OS deployment technology, which isn't a use-case that I've tested, and in particular ostree-grub2 probably doesn't work yet.

Source code:

Binaries (no trust path, so only use these if you have a test VM):

  • deb xdg-app main

The "Hello, World" of xdg-apps

Another thing I set out to do here was to make a runtime and an app out of Debian packages. Most of the test applications in and around GNOME use the "freedesktop" or "GNOME" runtimes, which consist of a Yocto base system and lots of RPMs, are rebuilt from first principles on-demand, and are extensive and capable enough that they make it somewhat non-obvious what's in an app or a runtime.

So, here's a step-by-step route through xdg-app, first using typical GNOME instructions, but then using the simplest GUI app I could find - xvt, a small xterm clone. I'm using a Debian testing (stretch) x86_64 virtual machine for all this. xdg-app currently requires systemd-logind to put users and apps in cgroups, either with systemd as pid 1 (systemd-sysv) or systemd-shim and cgmanager; I used the default systemd-sysv. In principle it could work with plain cgmanager, but nobody has contributed that support yet.

Demonstrating an existing xdg-app

Debian's kernel is currently patched to allow unprivileged users to create user namespaces, but makes this runtime-configurable, because there have been various security issues in that feature, making it a security risk for a typical machine (and particularly a server). Hopefully unprivileged user namespaces will soon be secure enough that we can enable them by default, but for now, we have to do one of three things to let xdg-app use them:

  • enable unprivileged user namespaces via sysctl:

    sudo sysctl kernel.unprivileged_userns_clone=1
  • make xdg-app root-privileged (it will keep CAP_SYS_ADMIN and drop the rest):

    sudo dpkg-statoverride --update --add root root 04755 /usr/bin/xdg-app-helper
  • make xdg-app slightly less privileged:

    sudo setcap cap_sys_admin+ep /usr/bin/xdg-app-helper

First, we'll need a runtime. The standard xdg-app tutorial would tell you to download the "GNOME Platform" version 3.18. To do that, you'd add a remote, which is a bit like a git remote, and a bit like an apt repository:

$ wget
$ xdg-app remote-add --user --gpg-import=gnome-sdk.gpg gnome \

(I'm ignoring considerations like trust paths and security here, for brevity; in real life, you'd want to obtain the signing key via https and/or have a trust path to it, just like you would for a secure-apt signing key.)

You can list what's available in a remote:

$ xdg-app remote-ls --user gnome

The Platform runtimes are what we want here: they are collections of runtime libraries with which you can run an application. The Sdk runtimes add development tools, header files, etc. to be able to compile apps that will be compatible with the Platform.

For now, all we want is the GNOME 3.18 platform:

$ xdg-app install --user gnome org.gnome.Platform 3.18

Next, we can install an app that uses it, from Alex Larsson's nightly builds of a subset of GNOME. The server they're on doesn't have a great deal of bandwidth, so be nice :-)

$ wget
$ xdg-app remote-add --user --gpg-import=nightly.gpg nightly \
$ xdg-app install --user nightly org.mypaint.MypaintDevel

We now have one app, and the runtime it needs:

$ xdg-app list
$ xdg-app run org.mypaint.MypaintDevel
[you see a GUI window]

Digression: what's in a runtime?

Behind the scenes, xdg-app runtimes and apps are both OSTree trees. This means the ostree tool, from the package of the same name, can be used to inspect them.

$ sudo apt install ostree
$ ostree refs --repo ~/.local/share/xdg-app/repo

A "ref" has roughly the same meaning as in git: something like a branch or a tag. ostree can list the directory tree that it represents:

$ ostree ls --repo ~/.local/share/xdg-app/repo \
    runtime/org.gnome.Platform/x86_64/3.18
d00755 0 0      0 /
-00644 0 0    493 /metadata
d00755 0 0      0 /files
$ ostree ls --repo ~/.local/share/xdg-app/repo \
    runtime/org.gnome.Platform/x86_64/3.18 /files
d00755 0 0      0 /files
l00777 0 0      0 /files/local -> ../var/usrlocal
l00777 0 0      0 /files/sbin -> bin
d00755 0 0      0 /files/bin
d00755 0 0      0 /files/cache
d00755 0 0      0 /files/etc
d00755 0 0      0 /files/games
d00755 0 0      0 /files/include
d00755 0 0      0 /files/lib
d00755 0 0      0 /files/lib64
d00755 0 0      0 /files/libexec
d00755 0 0      0 /files/share
d00755 0 0      0 /files/src

You can see that /files in a runtime is basically a copy of /usr. This is not coincidental: the runtime's /files gets mounted at /usr inside the xdg-app container. There is also some metadata, which is in the ini-like syntax seen in .desktop files:

$ ostree cat --repo ~/.local/share/xdg-app/repo \
    runtime/org.gnome.Platform/x86_64/3.18 /metadata

[Extension org.freedesktop.Platform.GL]

[Extension org.freedesktop.Platform.Timezones]

[Extension org.gnome.Platform.Locale]


Looking at an app, the situation is fairly similar:

$ ostree ls --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master
d00755 0 0      0 /
-00644 0 0    258 /metadata
d00755 0 0      0 /export
d00755 0 0      0 /files

This time, /files maps to what will become /app for the application, which was compiled with --prefix=/app:

$ ostree ls --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master /files
d00755 0 0      0 /files
-00644 0 0   4599 /files/manifest.json
d00755 0 0      0 /files/bin
d00755 0 0      0 /files/lib
d00755 0 0      0 /files/share

There is also a /export directory, which is made visible to the host system so that the contained app can appear as a "first-class citizen" in menus:

$ ostree ls --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master /export
d00755 0 0      0 /export
d00755 0 0      0 /export/share
user@debian:~$ ostree ls --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master /export/share
d00755 0 0      0 /export/share
d00755 0 0      0 /export/share/app-info
d00755 0 0      0 /export/share/applications
d00755 0 0      0 /export/share/icons
user@debian:~$ ostree ls --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master /export/share/applications
d00755 0 0      0 /export/share/applications
-00644 0 0    715 /export/share/applications/org.mypaint.MypaintDevel.desktop
$ ostree cat --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master \
    /export/share/applications/org.mypaint.MypaintDevel.desktop
[Desktop Entry]
Name=(Nightly) MyPaint
Exec=mypaint %f
Comment=Painting program for digital artists
GenericName=Raster Graphics Editor
GenericName[fr]=Éditeur d'Image Matricielle

Again, there's some metadata:

$ ostree cat --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master /metadata


[Extension org.mypaint.MypaintDevel.Debug]

Building a runtime, probably the wrong way

The way in which the reference/demo runtimes and containers are generated is... involved. As far as I can tell, there's a base OS built using Yocto, and the actual GNOME bits come from RPMs. However, we don't need to go that far to get a working runtime.

In preparing this runtime I'm probably completely ignoring some best-practices and tools - but it works, so it's good enough.

First we'll need a repository:

$ sudo install -d -o$(id -nu) /srv/xdg-apps
$ ostree init --repo /srv/xdg-apps

I'm just keeping this local for this demonstration, but you could rsync it to a web server's exported directory or something - a lot like a git repository, it's just a collection of files. We want everything in /usr because that's what xdg-app expects, hence usrmerge:

$ sudo mount -t tmpfs -o mode=0755 tmpfs /mnt
$ sudo debootstrap --arch=amd64 --include=libx11-6,usrmerge \
    --variant=minbase stretch /mnt
$ sudo mkdir /mnt/runtime
$ sudo mv /mnt/usr /mnt/runtime/files

This obviously has a lot of stuff in it that we don't need - most obviously init, apt and dpkg - but it's Good Enough™.

We will also need some metadata. This is sufficient:

$ sudo sh -c 'cat > /mnt/runtime/metadata'

That's a runtime. We can commit it to ostree, and generate xdg-app metadata:

$ ostree commit --repo /srv/xdg-apps \
    --branch runtime/org.debian.Debootstrap/x86_64/8.20160130 \
$ fakeroot ostree commit --repo /srv/xdg-apps \
    --branch runtime/org.debian.Debootstrap/x86_64/8.20160130
$ fakeroot xdg-app build-update-repo /srv/xdg-apps

(I'm not sure why ostree and xdg-app report "Operation not permitted" when we aren't root or fakeroot - feedback welcome.)

build-update-repo would presumably also be the right place to GPG-sign your repository, if you were doing that.

We can add that as another xdg-app remote:

$ xdg-app remote-add --user --no-gpg-verify local file:///srv/xdg-apps
$ xdg-app remote-ls --user local

Building an app, probably the wrong way

The right way to build an app is to build a "SDK" runtime - similar to that platform runtime, but with development files and tools - and recompile the app and any missing libraries with ./configure --prefix=/app && make && make install. I'm not going to do that, because simplicity is nice, and I'm reasonably sure xvt doesn't actually hard-code /usr into the binary:

$ install -d xvt-app/files/bin
$ sudo apt-get --download-only install xvt
$ dpkg-deb --fsys-tarfile /var/cache/apt/archives/xvt_2.1-20.1_amd64.deb \
    | tar -xvf - ./usr/bin/xvt
$ mv usr xvt-app/files

Again, we'll need metadata, and it's much simpler than the more production-quality GNOME nightly builds:

$ cat > xvt-app/metadata

$ fakeroot ostree commit --repo /srv/xdg-apps \
    --branch app/org.debian.packages.xvt/x86_64/2.1-20.1 xvt-app
$ fakeroot xdg-app build-update-repo /srv/xdg-apps
Updating appstream branch
No appstream data for runtime/org.debian.Debootstrap/x86_64/8.20160130
No appstream data for app/org.debian.packages.xvt/x86_64/2.1-20.1
Updating summary
$ xdg-app remote-ls --user local

The obligatory screenshot

OK, good, now we can install it:

$ xdg-app install --user local org.debian.Debootstrap 8.20160130
$ xdg-app install --user local org.debian.packages.xvt 2.1-20.1
$ xdg-app run --branch=2.1-20.1 org.debian.packages.xvt

xvt in a container

and you can play around with the shell in the xvt and see what you can and can't do in the container.

I'm sure there were better ways to do most of this, but I think there's value in having such a simplistic demo to go alongside the various GNOMEish apps.


Thanks to all those!

January 30, 2016 06:07 PM

Neil McGovern

On ZFS in Debian

I’m currently over at FOSDEM, and have been asked by a couple of people about the state of ZFS in Debian. So, I thought I’d give a quick post to explain what Debian’s current plan is (which has come together after a lot of discussion with the FTP Masters and others about what we should do).

TLDR: It’s going in contrib, as a source-only DKMS module.

Longer version:

Debian has always prided itself on providing the unequivocally correct solution to our users and downstream distributions. This also includes licenses – we make sure that Debian contains 100% free software. This means that if you install Debian, you are guaranteed the freedoms offered under the DFSG and our social contract.

Now, this is where ZFS on Linux gets tricky. ZFS is licensed under the CDDL, and the Linux kernel under the GPLv2-only. The project’s view is that both of these are free software licenses, but that they’re incompatible with each other. This incompatibility means that there is a risk in producing a combined work of Linux and a CDDL module. (Note: there are arguments about whether a kernel module, once loaded, is a combined work with the kernel. I’m not touching that with a barge pole, as I Am Not A Lawyer.)

Now, does this mean that Debian would get sued by distributing ZFS natively compiled into the kernel? Well, maybe, but I think it’s a bit unlikely. This doesn’t mean it’s the right choice for Debian to take as a project though! It brings us back to our promise to our users, and our commercial and non-commercial downstream distributions. If a commercial downstream distribution took the next release of stable, and used our binaries, they may well get sued if they have enough money to make it worthwhile. Additionally, Debian has always taken its commitment to upstream licenses very seriously. If there’s a doubt, it doesn’t go in official Debian.

It should be noted that ZFS is something that is important to a lot of Debian users, who all want to be able to use ZFS in a manner that makes it easier for them to install. Thus, the position that we’ve arrived at is that we can ship ZFS as a source-only DKMS module. This means it will be built on the target machines, and we’re not distributing binaries. There’s also a warning in the README.Debian file explaining that care should be taken if you do things with the resultant binary – as we can’t promise it complies with the licenses.

Finally, I should point out that this isn’t my decision in the end. The contents of the archive are a decision for the FTP Masters, as that’s delegated to them. However, what I have been able to do is coordinate many conflicting views, and I hope that ZFS will be accepted into the archive soon!

by Neil McGovern at January 30, 2016 04:35 PM

January 29, 2016

Philip Withnall

Instrumenting the GLib main loop with Dunfell

tl;dr: Visualise your main context and sources using Dunfell. Feedback and ideas welcome.

At the DX hackfest, I’ve been working on a new tool for instrumenting and visualising the behaviour of the GLib main context (or main contexts) in your program.

Screenshot from 2016-01-29 11-17-35

It’s called Dunfell (because I’m a sucker for hills) and at a high level it works by using SystemTap to record various GMainContext interactions in your program, saving them to a log file. The log file can then be examined using a viewer program.

The source is available on GitLab or GitHub because I still haven’t decided which is better.

In the screenshot above, each vertical line is a thread, each blue box is one dispatch phase of the main context which is currently running on that thread, each orange blob is a new GSource being created, and the green blob is a GSource which has been selected for closer inspection.

At the moment, it requires a couple of GLib patches to add some more SystemTap probe points, and it also requires a recent version of GTK+. It needs SystemTap, and I’ve only tested it on Fedora, so it might need some patching to work with the SystemTap installed on other distributions.

Screenshot from 2016-01-29 11-57-39

This screenshot is of a trace of the buffered-input-stream test from GIO, showing I/O callbacks being made across threads as idle source callbacks.

More visualisation ideas are welcome! At the moment, what Dunfell draws is quite simplistic. I hope it will be able to solve various common debugging problems eventually but suggestions for ways to do this intuitively, or for other problems to visualise, are welcome. Here are the use cases I was initially thinking about (from the README):

  • Detect GSources which are never added to a GMainContext.
  • Detect GSources which are dispatched too often (i.e. every main context iteration).
  • Detect GSources whose dispatch function takes too long (and hence blocks the main context).
  • Detect GSources which are never removed from their GMainContext after being dispatched (but which are never dispatched again).
  • Detect GMainContexts which have GSources attached or (especially) events pending, but which aren’t being iterated.
  • Monitor the load on each GMainContext, such as how many GSources it has attached, and how many events are processed each iteration.
  • Monitor ongoing asynchronous calls and GTasks, giving insight into their nesting and dependencies.
  • Monitor unfinished or stalled asynchronous calls.
  • Allow users to record logs to send to the developers for debugging on a different machine. The users may have to install additional software to record these logs (some component of Dunfell, plus its dependencies), but should not have to recompile or otherwise modify the program being debugged.
  • Work with programs which purely use GLib, through to programs which use GLib, GIO and GTK+.
  • Allow visualisation of this data, both in a standalone program, and in an IDE such as GNOME Builder.
  • Allow visualising differences between two traces.
  • Minimise runtime overhead of logging a program, to reduce the risk of disturbing race conditions by enabling logging.
  • Connecting to an already-running program is not a requirement, since by the time you’ve decided there’s a problem with a program, it’s already in the wrong state.

by Philip Withnall at January 29, 2016 11:14 AM

January 26, 2016

Philip Withnall

DX hackfest: 2016 edition

By this time tomorrow, the 2016 edition of the GNOME developer experience hackfest will have started. This year, it’s in Brussels, kindly hosted by betacowork and ICAB.


We will be spending 3 days looking at a variety of things on the agenda to improve the lives of developers on GNOME, and make plans for the rest of the year. Watch out for updates on

Thanks to the GNOME Foundation for sponsoring the travel for various people who are coming.


Collabora is sponsoring snacks throughout, and is sending 5 of us along for the hackfest. Thank you also to the other companies who are sending or letting people come — I know of Red Hat, Endless Mobile, Codethink and Canonical (please let me know if I’ve forgotten anyone!).


See you at FOSDEM afterwards?

by Philip Withnall at January 26, 2016 10:14 AM

January 21, 2016

Philip Withnall

Checking JSON files for correctness

tl;dr: Write a Schema for your JSON format, and use Walbottle to validate your JSON files against it.

As JSON is used more and more in place of XML, we need a replacement for tools like xmllint to check that JSON documents follow whatever format they are supposed to be following.

Walbottle is a tool to do this, which I’ve been working on as part of client work at Collabora. Firstly, a brief introduction to JSON Schema, then I will give an example of how to integrate Walbottle into an application. In a future post I hope to explain some of the theory behind its test vector generation.

JSON Schema is a standard for describing how a particular type of JSON document should be structured. (There’s a good introduction on the Space Telescope Science Institute’s website.) For example, what properties should be in the top-level object in the document, and what their types should be. It is entirely analogous to XML Schema (or Relax NG). Things become a little confusing due to the fact that JSON Schema files are themselves JSON, which means that there is a JSON Schema file for validating that JSON Schema files are well-formed; this is the JSON meta-schema.

Here is an example JSON Schema file (taken from the JSON Schema website):

	"title": "Example Schema",
	"type": "object",
	"properties": {
		"firstName": {
			"type": "string"
		"lastName": {
			"type": "string"
		"age": {
			"description": "Age in years",
			"type": "integer",
			"minimum": 0
	"required": ["firstName", "lastName"]

Valid instances of this JSON schema are, for example:

	"firstName": "John",
	"lastName": "Smith"


	"firstName": "Jessica",
	"lastName": "Smith",
	"age": 31

or even:

	"firstName": "Sandy",
	"lastName": "Sanderson",
	"country": "England"

The final example is important: by default, JSON object instances are allowed to contain properties which are not defined in the schema (because the default value for the JSON Schema additionalProperties keyword is an empty schema, rather than false).

What does Walbottle do? It takes a JSON Schema as input, and can either:

  • check the schema is a valid JSON Schema (the json-schema-validate tool);
  • check that a JSON instance follows the schema (the json-validate tool); or
  • generate JSON instances from the schema (the json-schema-generate tool).

Why is the last option useful? Imagine you have written a library which interacts with a web API which returns JSON. You use json-glib to turn the HTTP responses into a JSON syntax tree (tree of JsonNodes), but you have your own code to navigate through that tree and extract the interesting bits of the response, such as success codes or new objects from the server. How do you know your code is correct?

Ideally, the web API author has provided a JSON Schema file which describes exactly what you should expect from one of their HTTP responses. You can use json-schema-generate to generate a set of example JSON instances which follow or subtly do not follow the schema. You can then run your code against these instances, and check whether it:

  • does not crash;
  • correctly accepts the valid JSON instances; and
  • correctly rejects the invalid JSON instances.

This should be a lot better than writing such unit tests by hand, because nobody wants to spend time doing that — and even if you do, you are almost guaranteed to miss a corner case, which leaves your code prone to crashing when given unexpected input. (Alarmists would say that it is vulnerable to attack, and that any such vulnerability of network-facing code is probably prone to escalation into arbitrary code execution.)

For the example schema above, json-schema-generate returns (amongst others) the following JSON instances:


They include valid and invalid instances, which are designed to try and hit boundary conditions in typical json-glib-using code.

How do you integrate Walbottle into your project? Probably the easiest way is to use it to generate a C or H file of JSON test vectors, and link or #include that into a simple test program which runs your code against each of them in turn.

Here is an example, straight from the documentation. Add the following to configure.ac:


AC_PATH_PROG([JSON_SCHEMA_VALIDATE],[json-schema-validate])
AS_IF([test "$JSON_SCHEMA_VALIDATE" = ""],
      [AC_MSG_ERROR([json-schema-validate not found])])

AC_PATH_PROG([JSON_SCHEMA_GENERATE],[json-schema-generate])
AS_IF([test "$JSON_SCHEMA_GENERATE" = ""],
      [AC_MSG_ERROR([json-schema-generate not found])])

Add this to the Makefile.am for your tests:

json_schemas = \
	my-format.schema.json \
	my-other-format.schema.json

EXTRA_DIST += $(json_schemas)

check-json-schema: $(json_schemas)
	$(AM_V_GEN)$(JSON_SCHEMA_VALIDATE) $^
check-local: check-json-schema
.PHONY: check-json-schema

json_schemas_h = $(json_schemas:.schema.json=.schema.h)
BUILT_SOURCES += $(json_schemas_h)
CLEANFILES += $(json_schemas_h)

%.schema.h: %.schema.json
	$(AM_V_GEN)$(JSON_SCHEMA_GENERATE) \
		--c-variable-name=$(subst -,_,$(notdir $*))_json_instances \
		--format c $^ > $@

my_test_suite_SOURCES = my-test-suite.c
nodist_my_test_suite_SOURCES = $(json_schemas_h)

And add this to your test suite C file itself:

#include "my-format.schema.h"


// Test the parser with each generated test vector from the JSON schema.
static void
test_parser_generated (gconstpointer user_data)
{
  guint i;
  GObject *parsed = NULL;
  GError *error = NULL;

  i = GPOINTER_TO_UINT (user_data);

  parsed = try_parsing_string (my_format_json_instances[i].json,
                               my_format_json_instances[i].size, &error);

  if (my_format_json_instances[i].is_valid)
    {
      // Assert @parsed is valid.
      g_assert_no_error (error);
      g_assert (G_IS_OBJECT (parsed));
    }
  else
    {
      // Assert parsing failed.
      g_assert_error (error, SOME_ERROR_DOMAIN, SOME_ERROR_CODE);
      g_assert (parsed == NULL);
    }

  g_clear_error (&error);
  g_clear_object (&parsed);
}


int
main (int argc, char *argv[])
{
  guint i;

  g_test_init (&argc, &argv, NULL);

  for (i = 0; i < G_N_ELEMENTS (my_format_json_instances); i++)
    {
      gchar *test_name = NULL;

      test_name = g_strdup_printf ("/parser/generated/%u", i);
      g_test_add_data_func (test_name, GUINT_TO_POINTER (i),
                            test_parser_generated);
      g_free (test_name);
    }

  return g_test_run ();
}

Walbottle is heading towards being mature. There are some features of the JSON Schema standard it doesn’t yet support: $ref/definitions and format. Its main downside at the moment is speed: test vector generation is complex, and the algorithms slow down due to computational complexity with lots of nested sub-schemas (so try to design your schemas to avoid this if possible). json-schema-generate recently acquired a --show-timings option which gives debug information about each of the sub-schemas in your schema, how many JSON instances it generates, and how long that took, which gives some insight into how to optimise the schema.

by Philip Withnall at January 21, 2016 09:38 AM

January 12, 2016

Gustavo Padovan

Collabora contributions to Linux Kernel 4.4

Linux Kernel 4.4 was released this week, and Collabora engineers helped in the development of the new kernel in a few different areas. A total of 38 patches from 8 Collabora engineers were added, making it the kernel release with the most Collabora developers ever! Only 7 of the 8 engineers are still part of Collabora, however, as unfortunately Javier left a few months ago, after completing his patches.

On that note, Collabora is hiring experienced kernel hackers to further increase our participation in the Linux Kernel. If you are interested, please drop us a line!

In this release Daniel Stone fixed a potential circular deadlock when loading the i915 GuC firmware, and an incorrect pipe parameter in drm_crtc_send_vblank_event() that was leading to a WARN_ON. Danilo Cesar Lemes de Paula improved the kernel-doc script to fix an issue with struct drm_modeset_lock not showing up in the final kernel documentation, and fixed a fault in the highlight processing by using arrays instead of hashes.

Emilio López enabled the EC verified boot context on Peach boards and added a driver to read/write the nvram verified boot context to/from userspace on Chromebook devices. Enric Balletbo i Serra added support for TI’s tps65217 charger driver, while Gustavo Padovan added cursor support to the exynos DRM driver. Javier made some improvements to the Chromebook EC driver.

Sjoerd Simons enabled rockchip support by default in the ARM multi_v7_defconfig and added a driver for the SPDIF audio transceiver on rockchip boards. Tomeu Vizoso removed the regulator_list, which was redundant because regulator devices can be found through the regulator_class, and fixed a clk reparenting issue on exynos5250 that was preventing the screen from working after a second suspend.

The commit count for each author breaks down as follows:

  • Daniel Stone (2)
  • Danilo Cesar Lemes de Paula (3)
  • Emilio López (3)
  • Enric Balletbo i Serra (3)
  • Gustavo Padovan (3)
  • Javier Martinez Canillas (3)
  • Sjoerd Simons (17)
  • Tomeu Vizoso (4)

by Gustavo Padovan at January 12, 2016 01:39 PM

January 11, 2016

Philip Withnall


tl;dr: G_OBJECT(NULL) evaluates to NULL with no side effects; G_IS_OBJECT(NULL) evaluates to FALSE with no side effects. The same is true for these macros for GObject subclasses.

Someone recently pointed out a commonly-misunderstood point about the GObject casting and type checking macros: they all happily accept NULL, without printing errors or causing assertion failures.

If you call G_OBJECT (NULL), it is a dynamically checked type cast of NULL, and evaluates to NULL, just as if you’d written (GObject *) NULL.

If you call G_IS_OBJECT (NULL), it evaluates to FALSE.

The misconception seems to be that they cause assertion failures. I think that arises from the fact that G_IS_OBJECT is commonly used with g_return_if_fail(), which does cause an assertion failure if G_IS_OBJECT returns FALSE.

Similarly, this all applies to the macros for GObject subclasses, like GTK_WIDGET and GTK_IS_WIDGET, or G_FILE and G_IS_FILE, etc.

Reference: g_type_check_instance_cast() and g_type_check_instance_is_a(), which are what these macros evaluate to.
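
To see all of this concretely, here is a minimal sketch (assuming a GLib development environment; the behaviour is exactly what the reference functions above document):

#include <glib-object.h>

int
main (void)
{
  gpointer ptr = NULL;

  // Dynamically checked cast of NULL: evaluates to NULL, silently.
  GObject *obj = G_OBJECT (ptr);
  g_assert (obj == NULL);

  // Type check of NULL: evaluates to FALSE, silently.
  g_assert (!G_IS_OBJECT (ptr));

  // By contrast, wrapping the check in g_return_*_if_fail() is what
  // produces the familiar critical warning:
  //   g_return_val_if_fail (G_IS_OBJECT (ptr), 1);

  return 0;
}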

by Philip Withnall at January 11, 2016 12:12 PM

January 10, 2016

Andrew Shadura

Public transport map of Managua

Holger Levsen writes about the public transport map of Managua, Nicaragua, which is, according to him, the first detailed map of Managua’s bus network:

If you haven’t been to Managua, you might not be able to immediately appreciate the usefulness of this. Up until now, there has been no map nor timetable for the bus system, which as you can see now easily and from far away, is actually quite big and is used by 80% of the population in a city, where the streets still have no names.

Having had a look at the map they produced, I have to admit I quite liked it: Rutas de Managua y Ciudad Sandino.

MapaNica, the community behind said map, are raising funds to make the lives of locals easier by publishing a printed version of the map and distributing it. They have already raised more than $3300 of their $7500 goal. Every further donation will help them print more maps.

Please go to their site and support their initiative!

January 10, 2016 04:26 PM

January 08, 2016

Jordi Mallach

Weird VirtIO errors on a jessie KVM host: Fixed!

Yesterday I posted a desperate plea for help, as I had no idea where else to look for clues on what was causing random I/O errors on the guests of our jessie KVM host.

Thanks to Michael Herold, who was kind enough to mail me after identifying our problem, now we know os-prober is to blame, triggering the problem on every kernel update on the host, and we have quickly uninstalled it from all our systems.

Thanks Michael! If you by any chance are going to attend FOSDEM, I will so happily buy you beers!

Let's hope anyone else wondering what's going on with their filesystems will find the trail to these blog posts and a quick solution!

January 08, 2016 08:57 AM

January 07, 2016

Jordi Mallach

Weird VirtIO errors on a jessie KVM host running Debian guests

Hi Interwebs! I'm facing a weird issue with one of our servers at work, involving Debian jessie, libvirt and Debian guests using VirtIO drivers. This is a plea for help. :)

Basically, we are getting random VirtIO errors inside our guests, resulting in stuff like this:

[4735406.568235] blk_update_request: I/O error, dev vda, sector 142339584
[4735406.572008] EXT4-fs warning (device dm-0): ext4_end_bio:317: I/O error -5 writing to inode 1184437 (offset 0 size 208896 starting block 17729472)
[4735406.572008] Buffer I/O error on device dm-0, logical block 17729472
[ ... ]
[4735406.572008] Buffer I/O error on device dm-0, logical block 17729481
[4735406.643486] blk_update_request: I/O error, dev vda, sector 142356480
[ ... ]
[4735406.748456] blk_update_request: I/O error, dev vda, sector 38587480
[4735411.020309] Buffer I/O error on dev dm-0, logical block 12640808, lost sync page write
[4735411.055184] Aborting journal on device dm-0-8.
[4735411.056148] Buffer I/O error on dev dm-0, logical block 12615680, lost sync page write
[4735411.057626] JBD2: Error -5 detected when updating journal superblock for dm-0-8.
[4735411.057936] Buffer I/O error on dev dm-0, logical block 0, lost sync page write
[4735411.057946] EXT4-fs error (device dm-0): ext4_journal_check_start:56: Detected aborted journal
[4735411.057948] EXT4-fs (dm-0): Remounting filesystem read-only
[4735411.057949] EXT4-fs (dm-0): previous I/O error to superblock detected

(From an Ubuntu 15.04 guest, EXT4 on LVM2)


Jan 06 03:39:11 titanium kernel: end_request: I/O error, dev vda, sector 1592467904
Jan 06 03:39:11 titanium kernel: EXT4-fs warning (device vda3): ext4_end_bio:317: I/O error -5 writing to inode 31169653 (offset 0 size 0 starting block 199058492)
Jan 06 03:39:11 titanium kernel: Buffer I/O error on device vda3, logical block 198899256
[ ... ]
Jan 06 03:39:12 titanium kernel: Aborting journal on device vda3-8.
Jan 06 03:39:12 titanium kernel: Buffer I/O error on device vda3, logical block 99647488

(From a Debian jessie guest, EXT4 directly on a VirtIO-based block device)

When this happens, it affects multiple guests on the host at the same time. Normally the errors are severe enough that the guests end up with a read-only file system, but we've seen a few survive with a non-fatal I/O error. The host's dmesg has nothing interesting to see.

We've seen this happen with quite heterogeneous guests:

  • Debian 6, 7 and 8 (Debian kernels 2.6.32, 3.2 and 3.16)
  • Ubuntu 14.09 and 15.04 (Ubuntu kernels)
  • 32 bit and 64 bit installs.

In short, we haven't seen a clear characteristic in any guest, other than the affected ones being those with some sustained I/O load (build machines, cgit servers, PostgreSQL RDBMSs...). Most of the time, guests that just sit there doing nothing with their disks are not affected.

The host is a stock Debian jessie install that manages libvirt-based QEMU guests. All the guests have their block devices using virtio drivers, some of them on spinning media behind an LSI RAID card (it was a 3ware card before, which got replaced as we were very suspicious of it, but we're getting the same results), and some of them on PCIe SSD storage. We have three other hosts with a similar setup, except that they run Debian wheezy (and honestly we're not too keen on upgrading them yet, just in case); none of them has ever shown this kind of problem.

We've been seeing this since last summer, and haven't found a pattern that tells us where these I/O error bugs are coming from. Google isn't revealing other people with a similar problem, and we're finding that quite surprising as our setup is quite basic.

So, dear Interwebs, have you seen this? We could use any comment (on the blog, or in Debian bug #810121, or kernel bug 110441) that clues us on what's to blame here. Thanks in advance!

Update: We finally know what's going on! The problem is gone at long last!

January 07, 2016 02:20 PM

December 22, 2015

Pekka Paalanen

Wayland protocol design: object lifespan

Now that we have a few years of experience with the Wayland protocol, I thought I would put some of my observations in writing. This, what will hopefully become a series rather than just one post, considers how to design Wayland protocol extensions the right way.

This first post considers protocol object lifespan and the related races between the compositor/server and the client. I assume that the reader is already aware of the Wayland protocol basics. If not, I suggest reading Chapter 4. Wayland Protocol and Model of Operation.

How protocol objects are created

On a new Wayland connection, the only object that exists is the wl_display, which is a specially constructed object. You always have it, and there is no wire protocol for creating it.

The only thing the client can create next is a wl_registry through the wl_display. Registry is the root of the whole interface (class) hierarchy. Wl_registry advertises the global objects by numerical name, and using wl_registry.bind request to bind to a global is the first normal way to create a protocol object.

Binding is slightly special still, as the protocol specification in XML for wl_registry uses the new_id argument type, but does not specify the interface (class) for the new object. In the wire protocol, this special argument gets turned into three arguments: interface name (string), interface version (uint32_t), and the new object ID (uint32_t). This is unique in the Wayland core protocol.

The usual way to create a new protocol object is for the client to send a request that has a new_id type of argument. The protocol specification (XML) defines what the interface is, so there is no need to communicate the interface type over the wire. All that is needed on the wire is the new object ID. Almost all object creation happens this way.

Although rare, the server may also create protocol objects for the client. This happens by having a new_id type of argument in an event. Every time the client receives this event, it receives a new protocol object.

As all requests and events are always part of some interface (like a member of a class), this creates an interface hierarchy. For example, wl_compositor objects are created from wl_registry, and wl_surface objects are created from wl_compositor.

Object creation never fails. Once the request or event is sent, the new object it creates exists, period. This keeps the protocol asynchronous, as there is no need to reply or check that the creation succeeded.
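
As an illustration of this creation flow, here is a minimal client sketch using libwayland-client (error handling kept to a bare minimum): every object below is created by a request carrying a new_id argument, and none of the creations can fail.

#include <stdio.h>
#include <string.h>
#include <wayland-client.h>

static struct wl_compositor *compositor = NULL;

// wl_registry.global advertises one global object per event; binding to
// it is the special case where the interface name and version travel on
// the wire alongside the new object ID.
static void
registry_global (void *data, struct wl_registry *registry, uint32_t name,
                 const char *interface, uint32_t version)
{
  if (strcmp (interface, "wl_compositor") == 0)
    compositor = wl_registry_bind (registry, name,
                                   &wl_compositor_interface, 1);
}

static void
registry_global_remove (void *data, struct wl_registry *registry,
                        uint32_t name)
{
}

static const struct wl_registry_listener registry_listener = {
  registry_global,
  registry_global_remove,
};

int
main (void)
{
  struct wl_display *display = wl_display_connect (NULL);
  struct wl_registry *registry;

  if (display == NULL)
    return 1;

  // Creation never fails: once wl_display.get_registry is sent, the
  // registry object exists, with no round-trip needed to confirm it.
  registry = wl_display_get_registry (display);
  wl_registry_add_listener (registry, &registry_listener, NULL);

  // One round-trip so the initial burst of wl_registry.global events
  // has arrived before we look at the result.
  wl_display_roundtrip (display);

  printf ("wl_compositor %s\n", compositor != NULL ? "bound" : "not found");

  wl_display_disconnect (display);
  return 0;
}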

How protocol objects are destroyed

There are two ways to destroy a protocol object. By far the most common one is to have a request in the interface that is specified to be a destructor. Most often this request is called "destroy". When the client code calls the function wl_foobar_destroy(), the request is sent to the server and the client side proxy (struct wl_proxy) for the object gets destroyed. The server then handles the destructor request at some point in the future.

The other way is to destroy the object by an event. In that case, no destructor must be defined in the interface's protocol specification, and the event must be clearly documented to be destructive as there is no automation nor safeties for this. This is for cases where the server decides when an object dies, and requires extreme care in protocol design to work right in all cases. When a client receives such an event, all it can do is destroy the proxy. The (in)famous example of an interface like this is wl_callback.
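
As a sketch of that (in)famous example: wl_display.sync creates a wl_callback, and its done event both delivers the result and kills the object, so the only thing the client may do is drop the proxy.

#include <wayland-client.h>

// wl_callback defines no requests and a single destructive event, "done".
// When it fires, all the client can do is destroy its proxy.
static void
callback_done (void *data, struct wl_callback *callback, uint32_t serial)
{
  int *fired = data;

  *fired = 1;
  wl_callback_destroy (callback);
}

static const struct wl_callback_listener callback_listener = {
  callback_done,
};

// Typical usage: the done event signals that the server has processed
// every request sent before the sync.
static void
wait_for_server (struct wl_display *display)
{
  struct wl_callback *callback = wl_display_sync (display);
  int fired = 0;

  wl_callback_add_listener (callback, &callback_listener, &fired);

  while (!fired && wl_display_dispatch (display) != -1)
    ;
}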

Enter the boogeyman: races

It is very important that both the client and the server agree on which protocol objects exist. If the client sends a request on, or references as an argument, an object that does not exist in the server's opinion, the server raises a protocol error, and disconnects the client. Obviously this should never happen, nor should it happen that the server sends an event to an object that the client destroyed.

Wayland being a completely asynchronous protocol, we have no implicit guarantees. The server may send an event at the same time as the client destroys the object, and now the event targets an object the client does not know about anymore. Rather than the client shooting itself dead (that's the server's job), we have a trick in libwayland-client: it silently ignores events to destroyed objects, until the server confirms that the object is truly gone.

This works very well for interfaces where the destructor is a request. If the client first sends the destructor request and then sends another request on the destroyed object, it just shot its own head off - no race needed.

Things get tricky for the other case, destructor events. The server may send the destructor event at the same time the client is sending a request on the same object. When the server finally gets the request, the object is already gone, and the client gets taken behind the shed and shot. Therefore pretty much the only safe way to use destructor events is if the interface does not define any requests at all. Ever, not even in future extensions. Furthermore, objects with that interface should not be used as arguments anywhere, or you may hit the race. That is why destructor events are difficult to use right.

The boogeyman's brother

There is yet another nasty race with events that create objects, i.e. server-created objects. If the client is destroying the (parent) object at the same time as the server is sending an event on that object, creating a new (child) object, the server cannot know if the client actually handled the event or not. If the client ignored the event, it will never tell the server to destroy that new object, and you leak in the server.

You could try to make your way out of that pitfall by writing in your protocol specification, that when the (parent) object is destroyed, all the child objects will be destroyed implicitly. But then the client must not send the destructor request for the child objects after it has destroyed the parent, because otherwise the server sees requests on objects it does not know about, and kicks you in the groin, hard. If the child interface defines a destructor, the client cannot destroy its proxies after destroying the parent object. If the child interface does not define a destructor, you can never free the server-side resources until the parent gets destroyed.

The client could destroy all the child objects with a defined destructor in one go, and then immediately destroy the parent object. I am not sure if that works, but it might. If it does not, you have to specify a whole tear-down protocol sequence. The client tells the server it wants to destroy the parent object, the server acks and guarantees it no longer sends any events on it, then the client actually destroys the parent object. Hey, you have a round-trip and have just turned a beautiful asynchronous protocol into a synchronous one, congratulations!

Concluding with recommendations

Here are my recommendations when designing Wayland protocol extensions:
  • Always make sure there is a guaranteed way to destroy all objects. This may sound obvious, but we have fixed several cases in the Wayland core protocol where there was no way to destroy a created protocol object such that all resources on both the server and client side could be freed. And there are still some cases not fixed.
  • Always define a destructor request. If you have any doubt whether your new interface needs a destructor request, just put it there. It is more awkward to add later than normal requests. If you do not have one, the client cannot tell the server to free those protocol object resources.
  • Do not use destructor events. They are hard to design right, and extending the interface later will be a bitch. The client cannot tell the server to free the resources, so objects with destructor events should be short-lived, and the destruction must be guaranteed.
  • Do not use server-side created objects without a serious thought. Designing the destruction sequence such that it never leaks nor explodes is tricky.

by pq ( at December 22, 2015 06:52 AM

November 30, 2015

Andrew Shadura

Support Software Freedom Conservancy

The Software Freedom Conservancy are desperately looking for financial support after one of their corporate supporters stopped their sponsorship. This week, there’s an anonymous pledge to match donations from new supporters.

Becoming an SFC supporter will help them fight for our software freedom. I have signed up for a monthly donation, and I suggest you do so too here.

November 30, 2015 06:26 PM

November 19, 2015

Philip Withnall

Hitori available as an xdg-app preview

I’ve been playing a bit with xdg-app recently, and just spent half an hour packaging Hitori as an xdg-app bundle. It was remarkably easy — in fact, it only took 7 commands.

If you want to install it, make sure you’ve got xdg-app installed, then:

# Install to /usr/share/ostree/trusted.gpg.d first
# Then set up the runtimes (if you haven’t already)
xdg-app add-remote --user gnome-sdk
xdg-app install-runtime --user gnome-sdk org.gnome.Platform 3.18
xdg-app install-runtime --user gnome-sdk org.freedesktop.Platform 1.2
# Add the repository for Hitori, then install it
xdg-app add-remote --user --no-gpg-verify pwithnall-hitori
xdg-app install-app --user pwithnall-hitori org.gnome.Hitori
# Play the game!
xdg-app run org.gnome.Hitori

What works?

  • Playing the game
  • Use of X11
  • Icons
  • Desktop file

What doesn’t work (yet)?

  • GSettings access (I may have misconfigured this, because it’s supposed to work)
  • Help manual

I don’t plan to support this repository especially well, since it was just for playing around, but it shows that xdg-app is coming along nicely!

by Philip Withnall at November 19, 2015 11:41 AM

November 15, 2015

Simon McVittie

Discworld Noir in a Windows 98 virtual machine on Linux

Discworld Noir was a superb adventure game, but is also notoriously unreliable, even in Windows on real hardware; using Wine is just not going to work. After many attempts at bringing it back into working order, I've settled on an approach that seems to work: now that qemu and libvirt have made virtualization and emulation easy, I can run it in a version of Windows that was current at the time of its release. Unfortunately, Windows 98 doesn't virtualize particularly well either, so this still became a relatively extensive yak-shaving exercise.


These instructions assume that /srv/virt is a suitable place to put disk images, but you can use anywhere you want.

The emulated PC

After some trial and error, it seems to work if I configure qemu to emulate the following:

  • Fully emulated hardware instead of virtualization (qemu-system-i386 -no-kvm)
  • Intel Pentium III
  • Intel i440fx-based motherboard with ACPI
  • Real-time clock in local time
  • No HPET
  • 256 MiB RAM
  • IDE primary master: IDE hard disk (I used 30 GiB, which is massively overkill for this game; qemu can use sparse files so it actually ends up less than 2 GiB on the host system)
  • IDE primary slave, secondary master, secondary slave: three CD-ROM drives
  • PS/2 keyboard and mouse
  • Realtek AC97 sound card
  • Cirrus video card with 16 MiB video RAM

A modern laptop CPU is an order of magnitude faster than what Discworld Noir needs, so full emulation isn't a problem, despite being inefficient.

There is deliberately no networking, because Discworld Noir doesn't need it, and a 17 year old operating system with no privilege separation is very much not safe to use on the modern Internet!

Software needed

  • Windows 98 installation CD-ROM as a .iso file (cp /dev/cdrom windows98.iso) - in theory you could also use a real optical drive, but my laptop doesn't usually have one of those. I used the OEM disc, version 4.10.1998 (that's the original Windows 98, not the Second Edition), which came with a long-dead PC, and didn't bother to apply any patches.
  • A Windows 98 license key. Again, I used an OEM key from a past PC.
  • A complete set of Discworld Noir (English) CD-ROMs as .iso files. I used the UK "Sold Out Software" budget release, on 3 CDs.
  • A multi-platform Realtek AC97 audio driver.

Windows 98 installation

It seems to be easiest to do this bit by running qemu-system-i386 manually:

qemu-img create -f qcow2 /srv/virt/discworldnoir.qcow2 30G
qemu-system-i386 -hda /srv/virt/discworldnoir.qcow2 \
    -drive media=cdrom,format=raw,file=/srv/virt/windows98.iso \
    -no-kvm -vga cirrus -m 256 -cpu pentium3 -localtime

Don't start the installation immediately. Instead, boot the installation CD to a DOS prompt with CD-ROM support. From here, run

fdisk

and create a single partition filling the emulated hard disk. When finished, hard-reboot the virtual machine (press Ctrl+C on the qemu-system-i386 process and run it again).

The DOS FORMAT.COM utility is on the Windows CD-ROM but not in the root directory or the default %PATH%, so you'll have to run:

d:\win98\format c:

to create the FAT filesystem. You might have to reboot again at this point.

The reason for doing this the hard way is that the Windows 98 installer doesn't detect qemu as supporting ACPI. You want ACPI support, so that Windows will issue IDLE instructions from its idle loop, instead of occupying a CPU core with a busy-loop. To get that, boot to a DOS prompt again, and use:

setup /p j /iv

/p j forces ACPI support (Thanks to "Richard S" on the Virtualbox forums for this tip.) /iv is unimportant, but it disables the annoying "billboards" during installation, which advertised once-exciting features like support for dial-up modems and JPEG wallpaper.

I used a "Typical" installation; there didn't seem to be much point in tweaking the installed package set when everything is so small by modern standards.

Windows 98 has built-in support for the Cirrus VGA card that we're emulating, so after a few reboots, it should be able to run in a semi-modern resolution and colour depth. Discworld Noir apparently prefers a 640 × 480 × 16-bit video mode, so right-click on the desktop background, choose Properties and set that up.

Audio drivers

This is the part that took me the longest to get working. Of the sound cards that qemu can emulate, Windows 98 only supports the SoundBlaster 16 out of the box. Unfortunately, the Soundblaster 16 emulation in qemu is incomplete, and in particular version 2.1 (as shipped in Debian 8) has a tendency to make Windows lock up during boot.

I've seen advice in various places to emulate an Ensoniq ES1370 (SoundBlaster AWE 64), but that didn't work for me: one of the drivers I tried caused Windows to lock up at a black screen during boot, and the other didn't detect the emulated hardware.

The next-oldest sound card that qemu can emulate is a Realtek AC97, which was often found integrated into motherboards in the late 1990s. This one seems to work, with the "A400" driver bundle linked above. For Windows 98 first edition, you need a driver bundle that includes the old "VXD" drivers, not just the "WDM" drivers supported by Second Edition and newer.

The easiest way to get that into qemu seems to be to turn it into a CD image:

genisoimage -o /srv/virt/discworldnoir-drivers.iso WDM_A400.exe
qemu-system-i386 -hda /srv/virt/discworldnoir.qcow2 \
    -drive media=cdrom,format=raw,file=/srv/virt/windows98.iso \
    -drive media=cdrom,format=raw,file=/srv/virt/discworldnoir-drivers.iso \
    -no-kvm -vga cirrus -m 256 -cpu pentium3 -localtime -soundhw ac97

Run the installer from E:, then reboot with the Windows 98 CD inserted, and Windows should install the driver.

Installing Discworld Noir

Boot up the virtual machine with CD 1 in the emulated drive:

qemu-system-i386 -hda /srv/virt/discworldnoir.qcow2 \
    -drive media=cdrom,format=raw,file=/srv/virt/DWN_ENG_1.iso \
    -no-kvm -vga cirrus -m 256 -cpu pentium3 -localtime -soundhw ac97

You might be thinking "... why not insert all three CDs into D:, E: and F:?" but the installer expects subsequent disks to appear in the same drive where CD 1 was initially, so that won't work. Instead, when prompted for a new CD, switch to the qemu monitor with Ctrl+Alt+2 (note that this is 2, not F2). At the (qemu) prompt, use info block to see a list of emulated drives, then issue a command like

change ide0-cd1 /srv/virt/DWN_ENG_2.iso

to swap the CD. Then switch back to Windows' console with Ctrl+Alt+1 and continue installation. I used a Full installation of Discworld Noir.

Transferring the virtual machine to GNOME Boxes

Having finished the "control freak" phase of installation, I wanted a slightly more user-friendly way to run this game, so I transferred the virtual machine to be used by libvirtd, which is the backend for both GNOME Boxes and virt-manager:

virsh create discworldnoir.xml

Here is the configuration I used. It's a mixture of automatic configuration from virt-manager, and hand-edited configuration to make it match the qemu-system-i386 command-line.

Running the game

If all goes well, you should now see a discworldnoir virtual machine in GNOME Boxes, in which you can boot Windows 98 and play the game. Have fun!

November 15, 2015 05:19 PM

November 12, 2015

Gustavo Padovan

Collabora contributions to Linux Kernel 4.3

Collabora developers contributed 48 patches to kernel 4.3 as part of our current projects.

Danilo worked on the kernel-doc scripts, adding cross-reference links to the HTML documentation and support for documenting arguments in struct bodies, while Sjoerd Simons fixed a clock definition in rockchip and an incorrect udelay usage for the stmmac phy reset delay.

Tomeu fixed gpiolib to defer probe if the pin controller isn’t available, and added another fix to chipidea USB to defer probe if usbmisc hasn’t been probed yet. On Tegra, Tomeu worked on support for the gpio-ranges property, and support for cpuidle_state.enter_freeze() was also added.

Gustavo Padovan did a lot of exynos DRM work, the most important changes being improvements to atomic modesetting, including asynchronous atomic commit in exynos: in async mode we just schedule the atomic update and return right away to userspace, in a similar way to how PageFlips work in the old API. In this release the exynos atomic modesetting interface was enabled for userspace usage. Another important set of patches removed the exynos_drm_display and exynos_drm_encoder layers, which greatly improved the code, making it cleaner and easier to use. Apart from that there are also a few cleanups and fixes.

Commit counts per author:

  • Danilo Cesar Lemes de Paula (2)
  • Gustavo Padovan (36)
  • Javier Martinez Canillas (1)
  • Sjoerd Simons (2)
  • Tomeu Vizoso (7)

by Gustavo Padovan at November 12, 2015 12:20 PM

November 09, 2015

Jeremy Whiting

kdesrc-build is a very useful tool, here's why

I've been thinking for some time about writing a post about my favorite tool for building, rebuilding, testing, fixing random parts of kde software and how I use it (many times a day, depending on the situation).

For anyone that doesn't know, kdesrc-build is a script written in perl. It lives in extragear/utils/kdesrc-build in the kde project hierarchy and can be cloned from kde:kdesrc-build if you've got your ~/.gitconfig as follows (if you don't, you should add it; go add it now, I'll wait):

[url "git://anongit.kde.org/"]
       insteadOf = kde:
[url "ssh://git@git.kde.org/"]
       pushInsteadOf = kde:
kdesrc-build is very useful in that running it with no arguments will build all of your kde stack. This includes all the frameworks (including Qt if you want it to), library dependencies, and all applications. To start using it, just clone it, build it (mkdir build, cd build, cmake ../, make, make install, or sudo make install if you aren't the owner of /usr/local yet) and you can run kdesrc-build from any path your terminal happens to be in. The one thing needed is a .kdesrc-buildrc file to tell it what you want to build, where you want it installed to, which build options you want, etc. This is pretty straightforward though, and most of the kde stack is covered by include files you can pull in from your .kdesrc-buildrc itself. Mine looks like this:

# Adjust all these settings at will


global
 qtdir /usr
 # qtdir /home/jeremy/devel/kde/src/qt5bulid/qtbase
 source-dir /home/jeremy/devel/kde/src
 build-dir /home/jeremy/devel/kde/build
 kdedir /usr/local

 git-repository-base kde-projects kde:

 cxxflags -pipe -DQT_STRICT_ITERATORS -DQURL_NO_CAST_FROM_STRING -DQT_NO_HTTP -DQT_NO_FTP -Wformat -Werror=return-type -Wno-variadic-macros -Wlogical-op
 # WARNING: opensuse users need -DLIB_SUFFIX=64 here, as long as FindKDE4Internal.cmake is used
 #          if you're using a distro without "lib64", remove the option.
 # cmake-options -DKDE4_BUILD_TESTS=TRUE -DLIB_SUFFIX=64

 make-options -j8
 #install-session-driver true
 branch-group kf5-qt5

end global

include devel/kde/src/extragear/utils/kdesrc-build/kf5-qt5-build-include

I go back and forth sometimes between distro packaged qt (in /usr) and my own built qt from git (in the other path) so I uncomment the one I want to use in those first few lines.

It's pretty simple and short: set 9 variables, include the kf5-qt5-build-include file, and we're good to go. So for me, kdesrc-build with no arguments builds many different kde applications, with their sources nicely organized under ~/devel/kde/src, their build folders easy to delete if needed in ~/devel/kde/build, and everything installed into /usr/local where I have my XDG_* variables set to find applications, libraries, data, default configurations, etc. Also if some part of the workspace becomes deprecated and I need to remove old libraries, .desktop files, and such, I can safely (from a terminal, not within X) nuke /usr/local/* and rerun kdesrc-build to rebuild everything that's current.

kdesrc-build frameworks - builds all the parts of kf5 itself.
kdesrc-build kdeedu - builds all the libraries and applications that have been ported to qt5/kf5 from kdeedu.
kdesrc-build --no-src kanagram - builds kanagram with my local changes for testing before committing the next feature, also useful to test patches from reviewboard (download, patch, kdesrc-build --no-src foo to build/install, run to test).
kdesrc-build --no-src khangman - builds whatever I've got checked out in kde/kdeedu/khangman at the time (currently an almost complete gsoc student's qml ui of khangman from his branch).
kdesrc-build --refresh-build - rebuilds everything with clean build folders using the cmake options from your .kdesrc-buildrc file; this is useful if you change these options and want to test them.
kdesrc-build --refresh-build --no-src foo - rebuilds foo with a clean build folder and doesn't do any git updates, only building what's in your local clone; this is useful when porting applications to kf5/qt5, to make sure cmake is rerun when trying a build of local changes.

A good thing to know is that errors are all logged, and you can check them simply by checking source-dir/log/latest/foo/error.log (which symlinks to cmake.log, build.log, or install.log, depending on where the error was).

One more nice thing: since kdesrc-build uses the kde-project metadata, it can guess where projects live in the KDE repository hierarchy. So even if I don't have skrooge or some other extragear application in my .kdesrc-buildrc file, and it's not in the included kf5-qt5-build-include file or whatnot, kdesrc-build skrooge will guess where skrooge comes from, clone it to the proper place in the hierarchy, and build it.

In summary, kdesrc-build is useful for what it was created for, building the kde stack of software with your preferences.

by Jeremy Whiting ( at November 09, 2015 02:22 PM

October 22, 2015

Xavier Claessens

git-phab: more updates

git-phab got tons of improvements over the last few days, thanks to Thibault Saunier.

  • New “browse” command to open tasks and differentials links. With no arguments it opens the task if your current branch is in the form “TXXX-description”. Otherwise it can be passed a list of task ids, differential ids or commits. For example:

    git phab browse T123 D345 HEAD bc287c9

  • It uses argcomplete for bash completion, but it seems to work only when typing “git-phab <command>”, not “git phab <command>”. Ideas on how to improve this?
  • The “attach” command can now create a new task on phabricator when the current branch is not associated with an existing one.
  • The “attach” command now proposes to create a branch with a name in the form “TXXX-description” if the current branch isn’t in that format yet. That makes it easier to update the patchset later, since the task will be guessed from the branch name if no --task argument is passed.
  • It is now mandatory to have a “.arcconfig” file in your git repository with at least “phabricator.uri” and “project” keys. The “attach” command will use that info to set the project on new differential/tasks.
  • The default commit range for “attach” and “log” commands is now from the head of the remote tracking branch to HEAD, instead of being hard-coded to “origin/master..HEAD”.
  • If your “.arcconfig” contains a “default-reviewers” key, it will be used when the --reviewers argument is not specified.
  • More ideas for future improvements have been filed in our backlog. Help and feedback is welcome.

by xclaesse at October 22, 2015 07:21 PM

October 19, 2015

Xavier Claessens

git-phab: updates

Many improvements on git-phab lately:

  • It is now hosted on
  • “git phab attach” now also pushes the branch and links it on the Maniphest task (only supported on fdo and Collabora’s phabricator atm). Reviewers/testers can then easily retrieve the branch with “git phab fetch Txxx”.
  • Fixed compatibility with phabricator’s fragile commit msg parser.
  • Check the backlog, give feedback, contribute :-)

by xclaesse at October 19, 2015 02:39 PM