Planet Collabora

March 30, 2015

Guillaume Desmottes

Tracking the reference count of a GstMiniObject using gdb

As part of my work at Collabora, I'm currently adding Valgrind support to the awesome gst-validate tool. The ultimate goal is to run our hundreds of GStreamer tests inside Valgrind as part of the existing QA infrastructure to automatically track memory related regressions (invalid reads, leaks, etc).

Most of the gst-validate changes have already landed and can be used very easily by passing the --valgrind argument to gst-validate-launcher. I'm now focusing on making sure most of our existing tests pass with Valgrind, which means doing quite a lot of memory-leak debugging (everyone loves doing that, right?).

A lot of GStreamer types are based on GstMiniObject instead of the usual GObject. That makes a lot of sense from a performance point of view, but it can make tracking ref count issues harder as we can't rely on tools such as RefDbg or gobject-list.

I was tracking a GstMiniObject leak today and was looking for a way to get a trace each time its reference count is modified. We can use GST_DEBUG="GST_REFCOUNTING:9" to get logs each time the object is reffed/unreffed, but I was actually interested in the full stack trace. With the help of the French gang (kudos to Dodji, Bastien and Christophe!) I managed to do so using good old gdb.

The first thing is to break when the object you want to track is created. You can either do this from gdb with b mysource:line, or just add a G_BREAKPOINT() to your source code.
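For instance, a hypothetical creation site could look like this (the program below is made up purely for illustration; the point is just to stop right where the instance you care about comes to life, with it still in scope):

#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GstCaps *caps;

  gst_init (&argc, &argv);

  /* Hypothetical creation site: in the real case this is wherever the
   * leaked object comes from. */
  caps = gst_caps_new_empty_simple ("video/x-raw");

  /* gdb pauses here, with 'caps' in scope for the watchpoint below. */
  G_BREAKPOINT ();

  gst_caps_unref (caps);
  return 0;
}

Start your app with gdb as usual, then use: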

set logging on
set pagination off

The output can be pretty long, so this ensures that logs are saved to a file (gdb.txt by default) and that gdb won't bother you asking for confirmation before printing output. Now start your app (run) and once it has paused use the following command:

watch -location ((GstMiniObject*)caps)->refcount

caps is the name of the instance of the object I want to track, as defined in the scope where I installed my breakpoint; update it to match yours. This command adds a watchpoint on the refcount of the object, which means gdb will now stop each time its value is modified. The -location option ensures that gdb watches the memory associated with the expression; not using it would limit us to the local scope of the variable.

Now we want to display a backtrace each time gdb pauses when this watchpoint is hit. This is done using commands:

commands
bt
continue
end

All the gdb instructions between commands and end will be automatically executed each time gdb pauses because of the watchpoint we just defined. In our case we first want to display a stack trace (bt) and then continue the execution of the program.

We are now all set, we just have to ask gdb to resume the normal execution of the program we are debugging:

continue

This should generate a gdb.txt log file containing something like:

Old value = 1
New value = 2
gst_mini_object_ref (mini_object=0x7ffff001c8f0) at gstminiobject.c:362
362	  return mini_object;
#0  0x00007ffff6f384ed in gst_mini_object_ref (mini_object=0x7ffff001c8f0) at gstminiobject.c:362
#1  0x00007ffff6f38b00 in gst_mini_object_replace (olddata=0x7ffff67f0c58, newdata=0x7ffff001c8f0) at gstminiobject.c:501
#2  0x00007ffff72573ed in gst_caps_replace (old_caps=0x7ffff67f0c58, new_caps=0x7ffff001c8f0) at ../../../gst/gstcaps.h:312
#3  0x00007ffff72578fa in helper_find_suggest (data=0x7ffff67f0c30, probability=GST_TYPE_FIND_MAXIMUM, caps=0x7ffff001c8f0) at gsttypefindhelper.c:230
#4  0x00007ffff6f7d606 in gst_type_find_suggest_simple (find=0x7ffff67f0bf0, probability=100, media_type=0x7ffff5de8a66 "video/mpegts", fieldname=0x7ffff5de8a4e "systemstream") at gsttypefind.c:197
#5  0x00007ffff5ddbec3 in mpeg_ts_type_find (tf=0x7ffff67f0bf0, unused=<optimized out>) at gsttypefindfunctions.c:2381
#6  0x00007ffff6f7dbc7 in gst_type_find_factory_call_function (factory=0x6dbbc0 [GstTypeFindFactory], find=0x7ffff67f0bf0) at gsttypefindfactory.c:215
#7  0x00007ffff7257d83 in gst_type_find_helper_get_range (obj=0x7e8280 [GstProxyPad], parent=0x7d0500 [GstGhostPad], func=0x7ffff6f2aa37 <gst_proxy_pad_getrange_default>, size=10420224, extension=0x7ffff00010e0 "MTS", prob=0x7ffff67f0d04) at gsttypefindhelper.c:355
#8  0x00007ffff683fd43 in gst_type_find_element_loop (pad=0x7e47d0 [GstPad]) at gsttypefindelement.c:1064
#9  0x00007ffff6f79895 in gst_task_func (task=0x7ef050 [GstTask]) at gsttask.c:331
#10 0x00007ffff6f7a971 in default_func (tdata=0x61ac70, pool=0x619910 [GstTaskPool]) at gsttaskpool.c:68
#11 0x0000003ebb070d68 in g_thread_pool_thread_proxy (data=<optimized out>) at gthreadpool.c:307
#12 0x0000003ebb0703d5 in g_thread_proxy (data=0x7ceca0) at gthread.c:764
#13 0x0000003eb880752a in start_thread (arg=0x7ffff67f1700) at pthread_create.c:310
#14 0x0000003eb850022d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
Hardware watchpoint 1: -location ((GstMiniObject*)caps)->refcount

Old value = 2
New value = 1
0x00007ffff6f388b6 in gst_mini_object_unref (mini_object=0x7ffff001c8f0) at gstminiobject.c:442
442	  if (G_UNLIKELY (g_atomic_int_dec_and_test (&mini_object->refcount))) {
#0  0x00007ffff6f388b6 in gst_mini_object_unref (mini_object=0x7ffff001c8f0) at gstminiobject.c:442
#1  0x00007ffff6f7d027 in gst_caps_unref (caps=0x7ffff001c8f0) at ../gst/gstcaps.h:230
#2  0x00007ffff6f7d615 in gst_type_find_suggest_simple (find=0x7ffff67f0bf0, probability=100, media_type=0x7ffff5de8a66 "video/mpegts", fieldname=0x7ffff5de8a4e "systemstream") at gsttypefind.c:198
#3  0x00007ffff5ddbec3 in mpeg_ts_type_find (tf=0x7ffff67f0bf0, unused=<optimized out>) at gsttypefindfunctions.c:2381
#4  0x00007ffff6f7dbc7 in gst_type_find_factory_call_function (factory=0x6dbbc0 [GstTypeFindFactory], find=0x7ffff67f0bf0) at gsttypefindfactory.c:215
#5  0x00007ffff7257d83 in gst_type_find_helper_get_range (obj=0x7e8280 [GstProxyPad], parent=0x7d0500 [GstGhostPad], func=0x7ffff6f2aa37 <gst_proxy_pad_getrange_default>, size=10420224, extension=0x7ffff00010e0 "MTS", prob=0x7ffff67f0d04) at gsttypefindhelper.c:355
#6  0x00007ffff683fd43 in gst_type_find_element_loop (pad=0x7e47d0 [GstPad]) at gsttypefindelement.c:1064
#7  0x00007ffff6f79895 in gst_task_func (task=0x7ef050 [GstTask]) at gsttask.c:331
#8  0x00007ffff6f7a971 in default_func (tdata=0x61ac70, pool=0x619910 [GstTaskPool]) at gsttaskpool.c:68
#9  0x0000003ebb070d68 in g_thread_pool_thread_proxy (data=<optimized out>) at gthreadpool.c:307
#10 0x0000003ebb0703d5 in g_thread_proxy (data=0x7ceca0) at gthread.c:764
#11 0x0000003eb880752a in start_thread (arg=0x7ffff67f1700) at pthread_create.c:310
#12 0x0000003eb850022d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
Hardware watchpoint 1: -location ((GstMiniObject*)caps)->refcount

If you see this kind of error in your log

Error evaluating expression for watchpoint 1 value has been optimized out

this means you may have to rebuild your application and/or its libraries without any optimization. This is pretty easy with autotools:

make clean
CFLAGS="-g3 -ggdb3 -O0" make

Now all you need is to grab a good cup of tea and start digging through this log to find your leak. Good fun!

by Guillaume Desmottes at March 30, 2015 05:56 PM

March 23, 2015

Jeremy Whiting

KRecipes 2.1

The KDE Gardening team has at last finished the Love project KRecipes with the 2.1 release which can be found here: http://download.kde.org/stable/krecipes/2.1.0/src/krecipes-2.1.0.tar.xz.mirrorlist

Enjoy.

by Jeremy Whiting (noreply@blogger.com) at March 23, 2015 10:02 PM

March 20, 2015

Pekka Paalanen

Weston repaint scheduling

Now that Presentation feedback has finally landed in Weston (feedback, flags), people are starting to pay attention to the output timings, since they can now be measured properly. I have already seen a couple of complaints that Weston has an extra frame of latency, and this is true. I also have a patch series to fix it that I am going to propose.

To explain how the patch series affects Weston's repaint loop, I made some JSON-timeline recordings before and after, and produced some graphs with Wesgr. Here I will explain how the repaint loop works timing-wise.

Original post Feb 11, 2015.
Update Mar 20, 2015: the patches have landed in Weston.


The old algorithm

The old repaint scheduling algorithm in Weston repaints immediately on receiving the pageflip completion event. This maximizes the time available for the compositor itself to repaint, but it also means that clients can never hit the very next vblank / pageflip.

Figure 1. The old algorithm, the client paints as response to frame callbacks.

Frame callback events are sent at the "post repaint" step. This gives clients almost a full frame's time to draw and send their content before the compositor goes to "begin repaint" again. In Figure 1 you can see that if a client paints extremely fast, the latency to screen is almost two frame periods. The frame latency can never be less than one frame period, because the compositor samples the surface contents (the "repaint flush" point) immediately after the previous vblank.

Figure 2. The old algorithm, the client paints as response to Presentation feedback events.

While frame-callback-driven clients still get the full frame rate, the situation is worse if the client's painting is driven by presentation_feedback.presented events. The intent is to draw and show a new frame as soon as the old frame has been shown. Because Weston starts its repaint immediately on the pageflip completion, which is essentially the same time when Presentation feedback is sent, the client cannot hit the repaint of this frame and gets postponed to the next one. This is the same two-frame latency as with frame callbacks, but here the frame rate is halved because the client waits for the frame to actually be shown before continuing, as is evident in Figure 2.

Figure 3. The old algorithm, client posts a frame while the compositor is idle.

Figure 3 shows a less relevant case, where the compositor is idle while a client posts a new frame ("damage commit"). When the compositor is idle graphics-wise (the gray background in the figure), it is not repainting continuously according to the output scanout cycle. To start painting again, Weston waits for an extra vblank first, then repaints, and then the new frame is shown on the next vblank. This is also a 1-2 frame period latency, but it is unrelated to the other two cases, and is not changed by the patches.

The modification to the algorithm

The modification is simple, yet perhaps counter-intuitive at first. We reduce the latency by adding a delay. The "delay before repaint" is in all the figures, and the old algorithm is essentially using a zero delay. The compositor's repaint is delayed so that clients have a chance to post a new frame before the compositor samples the surface contents.

Choosing a good amount of delay is a hard question. If the delay is too small, clients do not have time to act; if it is too long, the compositor itself will be in danger of missing the vblank deadline. I do not know what a good amount is or how to derive it, so I just made it configurable. You can set the repaint window length in milliseconds in weston.ini. The repaint window is the time from starting repaint to the deadline, so the delay is the frame period minus the repaint window. If the repaint window is too long for a frame period, the algorithm reduces to the old behaviour.
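As an illustration (the option is, if I recall correctly, called repaint-window and lives in the core section; check the weston.ini man page for your version), a 7 millisecond repaint window would be configured like this:

[core]
repaint-window=7

At a 60 Hz refresh the frame period is about 16.7 ms, so a 7 ms repaint window leaves a delay of roughly 9.7 ms between the pageflip completion and the start of the compositor's repaint.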

The new algorithm

The following figures are made with a 60 Hz refresh and a 7 millisecond repaint window.

Figure 4. The new algorithm, the client paints as response to frame callback.

When a client paints as response to the frame callback (Figure 4), it still has a whole frame period of time to paint and post the frame. The total latency to screen is a little shorter now, by the length of the delay before the compositor's repaint. It is a slight improvement.

Figure 5. The new algorithm, the client paints as response to Presentation feedback.

A significant improvement can be seen in Figure 5. A client that uses the Presentation extension to wait for a frame to be actually shown before painting again is now able to reach the full output frame rate. It just needs to paint and post a new frame during the delay before the compositor's repaint. This mode of operation provides the shortest possible latency to screen, as the client is able to target the very next vblank. The latency is below one frame period if the deadlines are met.

Discussion

This is a relatively simple change that should reduce display latency, but analyzing how exactly it affects things is not trivial. That is why Wesgr was born.

This change does not really allow clients to wait some additional time before painting to reduce the latency even more, because nothing tells clients when the compositor will repaint exactly. The risk of missing an unknown deadline grows the later a client paints. Would knowing the deadline have practical applications? I'm not sure.

These figures also show the difference between the frame callback and Presentation feedback. When a client's repaint loop is driven by frame callbacks, it maximizes the time available for repainting, which reduces the possibility to miss the deadline. If a client drives its repaint loop by Presentation feedback events, it minimizes the display latency at the cost of increased risk of missing the deadline.

All the above ignores a few things. First, we assume that the time of display is the point of vblank which starts to scan out the new frame. Scanning out a frame actually takes most of the frame period; it's not instantaneous. Going deeper, updating the framebuffer during the scanout period instead of during vblank could allow reducing latency even more, but the matter becomes complicated and even somewhat subjective. I hear some people prefer tearing to reduce the latency further. Second, we also ignore any fencing issues that might come up in practice. If a client submits a GPU job that takes a long while, there is a good chance it will cause everything to miss a deadline or more.

As usual, this work and most of the development of JSON-timeline and Wesgr were sponsored by Collabora.

PS. Latency and timing issues are nothing new. Owen Taylor has several excellent posts on related subjects in his blog.

by pq (noreply@blogger.com) at March 20, 2015 10:25 AM

March 03, 2015

Jeremy Whiting

QtSpeech progress

This week some changes in knotifications/knotifyconfig/kanagram/okular are in the works. The kanagram changes are already on master, the others are in review. Those changes are bringing back the use of text to speech features via the new QtSpeech module. Some have asked what the status of QtSpeech is, so I thought I'd share a bit about it here.

Frederik Gladhorn created the QtTextToSpeech module a while ago as a test to see how feasible it would be to wrap the TTS APIs of all the platforms Qt supports in one easy-to-use Qt API. This turned out to be a great idea in my opinion. The predecessor to QtSpeech in KDE applications was Jovie, formerly known as kttsd. While it worked for the most part, it required a daemon to be running which (originally) spoke with different synthesizers and was later modified to use speech-dispatcher directly instead (when it was renamed to Jovie). QtSpeech on the other hand is a library. If you want to use it, you link to it in your application, create a QTextToSpeech object, and pass any text to speak to its "say" method. No D-Bus connection required, no daemon required, just a small, light library that wraps the native platform TTS API directly.
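To give an idea of what that looks like, here is a minimal sketch of my own (not code from the QtSpeech repository, and the exact headers may differ between snapshots):

#include <QCoreApplication>
#include <QTextToSpeech>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // One object, backed by the native platform TTS engine.
    QTextToSpeech speech;

    // Quit once the engine finishes speaking and returns to the Ready state.
    QObject::connect(&speech, &QTextToSpeech::stateChanged,
                     [&app](QTextToSpeech::State state) {
                         if (state == QTextToSpeech::Ready)
                             app.quit();
                     });

    speech.say("Hello from QtSpeech");

    return app.exec();
}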

As for the status of QtSpeech, I'm afraid it's not quite ready for prime time. It likely won't get added to Qt 5.5, which has its feature freeze next Monday. It is however ready to be tested, improved, etc. on each platform. Most of its API is implemented completely on Linux; the basic API (saying text) is implemented on Android, Windows and Mac OS X. Patches are on gerrit to implement the rest of the API (getting available voices, locales, setting the voice) on OS X and will be written soon for Windows also. I plan to spend a bit of time on it each week so it will be ready for release with Qt 5.6, and I hope anyone else interested will join us.

More information about QtSpeech can be found here: http://qt-project.org/wiki/QtSpeech. I hope this update has been helpful.

P.S. Here's a work in progress screenshot of the example widget Frederik created which is inside the QtSpeech git repository as it appears on OS X.


Edit: The wiki has been moved apparently. It's now found here: https://wiki.qt.io/index.php?title=QtSpeech

by Jeremy Whiting (noreply@blogger.com) at March 03, 2015 02:49 AM

February 28, 2015

Jeremy Whiting

node.js experience wanted

Hello all,

If there's anyone in the community, or even just reading this blog, that has experience with node.js and a bit of time, I would like to recruit you for a special task. The task is to get bodega-server (and maybe the webapp or admin client too, if you're so inclined) to actually work again. It worked at some point in the past year from what I hear, but currently it just spews 404 error pages for any api call it gets. I gather that this is because the nodes that it uses have changed their api since it was written. My time is limited and I've poked it enough to not give warnings at runtime anymore, but someone who really knows the ins and outs of node.js could probably fix it much faster than I could, so I am asking for such a brave soul to come forward and get the next generation software/data/"stuff" distribution system working again. I know you're out there and you're considering it; stop considering, hop on #kde-devel or #kde-www or anywhere on freenode and find me or others trying to get this going. Or just look at the code itself here and throw me some pointers.

I can't promise much except fame, thanks, admiration of your peers, etc. but hopefully that's enough.

P.S. this couldn't happen soon enough, ocs/attica, knewstuff, and opendesktop/kde-look, etc. are really showing their age. Having bodega working would make a lot of awesome things possible again.

by Jeremy Whiting (noreply@blogger.com) at February 28, 2015 10:28 PM

February 16, 2015

Philip Withnall

D-Bus API design guidelines

D-Bus has just gained some upstream guidelines on how to write APIs which best use its features and follow its conventions. Hopefully they draw together all the best practices from the various articles which have been published about writing D-Bus APIs in the past. If anybody has suggestions, criticism or feedback about them, or ideas for extra guidelines to include, please get in touch!

by Philip Withnall at February 16, 2015 05:53 PM

February 15, 2015

Javier Martinez Canillas

Collabora contributions to Linux kernel 3.19

Linux 3.19 was released last week and this version again contains contributions made by Collabora engineers as a part of our current projects.

In total 60 patches were contributed to the 3.19 release. These were for:

  • Work in the Intel i915 DRM driver to support atomic plane updates.
  • Fixes and removal of unnecessary layers in the Exynos DRM driver as a preparation to add atomic mode settings support.
  • Add support to the regulator framework to define regulators' initial and suspend operating modes.
  • Changes to the max77802 regulator driver to support regulator operating mode changes.
  • Fix the Real-Time-Clock in the Snow, Peach Pit and Pi Chromebooks.
  • Enable kernel config options to have display panel working on Exynos boards.
  • Allow the regulator drivers to be enabled and disabled when the Exynos system enters and leaves suspend states.
  • Many cleanup and fixes to the common clock framework and clock drivers as a preparation for the per-user clock API change.

Following is the complete list of patches merged in this kernel release:


by Javier Martinez Canillas at February 15, 2015 10:16 AM

February 11, 2015

Philip Withnall

GNOME programming guidelines: the rise of gnome-devel-docs


Before I start, an addendum to my last post about the DX hackfest: I wish to thank Codethink for sponsoring dinner one night of the event. I forgot to include that in my original post, sorry! Thanks again to all the companies who let employees attend: Endless, Codethink, Canonical and Red Hat.

tl;dr: Check out the new GNOME Programming Guidelines and file bugs in Bugzilla. Though it looks like a cronjob may be taking the docs offline occasionally. This will be fixed.

Now, to some of the results of the hackfest. In the last week or so, I’ve been working on expanding the GNOME programming guidelines, upstreaming various bits of documentation which Collabora have been writing for a customer who is using the GNOME stack in a large project. The guidelines were originally written in the early days of GNOME by Federico, Miguel and Morten; Federico updated them in 2013, and now they’ve been expanded again.

It looks like these guidelines can fill one of the gaps we currently have in documentation, where we need to recommend best practices and give tutorial-style examples, but gtk-doc–generated API manuals are not the right place. For example, the new guidelines include recommendations for making libraries parallel-installable (based off Havoc’s original article, with permission); or recommendations for choosing where to store data (in GSettings, a serialised GVariant store, or a full-on GOM/SQLite database?). The guidelines are intended to be useful to all developers, although inherently need to target newer developers more, so may simplify a few things.

I’ve still got some ideas for things to add. For example, I need to rework some of my blog posts about GMainContext into an article, since we should be documenting before blogging. Other ideas are very welcome, as is criticism, feedback and improvements: please file a bug against gnome-devel-docs. Thanks to the documentation team for their help and reviews!

by Philip Withnall at February 11, 2015 03:22 PM

February 03, 2015

Sjoerd Simons

Debian Jessie on Raspberry Pi 2

Apart from being somewhat slow, one of the downsides of the original Raspberry Pi SoC was that it had an old ARM11 core which implements the ARMv6 architecture. This was particularly unfortunate as most common distributions (Debian, Ubuntu, Fedora, etc.) standardized on the ARMv7-A architecture as a minimum for their ARM hardfloat ports, which is one of the reasons for Raspbian and the various other RPI specific distributions.

Happily, with the new Raspberry Pi 2 using Cortex-A7 cores (which implement the ARMv7-A architecture) this issue is out of the way, which means that a standard Debian hardfloat userland will run just fine. So the obvious first thing to do when an RPI 2 appeared on my desk was to put together a quick Debian Jessie image for it.

The result of which can be found at: https://images.collabora.co.uk/rpi2/

Login as root with password debian (obviously do change the password and create a normal user after booting). The image is 3G, so it should fit on any SD card marketed as 4G or bigger. Using bmap-tools for flashing is recommended; otherwise you'll be waiting for 2.5G of zeros to be written to the card, which tends to be rather boring. Note that the image is really basic and will just get you to a login prompt on either serial or HDMI; batteries are very much not included, but can be apt-getted :).
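For example (the image file name below is just a placeholder for whatever you downloaded, and the device node will differ on your machine), flashing with bmap-tools looks something like this; if a matching .bmap file is not found next to the image you can point at it with --bmap:

sudo bmaptool copy jessie-rpi2.img.gz /dev/mmcblk0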

Technically, this image is simply a Debian Jessie debootstrap with extra packages for hardware support. Unlike Raspbian, the first partition (which contains the firmware & kernel files to boot the system) is mounted on /boot/firmware rather than on /boot. This is because the VideoCore expects the first partition to be a FAT filesystem, but mounting FAT on /boot really doesn't work right on Debian systems as it contains files managed by dpkg (e.g. the kernel package) which requires a POSIX compatible filesystem. Essentially the same reason why Debian is using /boot/efi for the ESP partition on Intel systems rather than mounting it on /boot directly.

For reference, the RPI2 specific packages in this image are from https://repositories.collabora.co.uk/debian/ in the jessie distribution and rpi2 component (this repository is enabled by default on the image). The relevant packages there are:

  • linux: Current 3.18 based package from Debian experimental (3.18.5-1~exp1 at the time of this writing) with a stack of patches on top from the raspberrypi github repository and tweaked to build an rpi2 flavour as the patchset isn't multiplatform capable :(
  • raspberrypi-firmware-nokernel: Firmware package and misc libraries packages taken from Raspbian, with a slight tweak to install in /boot/firmware rather than /boot.
  • flash-kernel: Current flash-kernel package from debian experimental, with a small addition to detect the RPI 2 and "flash" the kernel to /boot/firmware/kernel7.img (which is what the GPU will try to boot on this board).

For the future, it would be nice to see the Raspberry Pi 2 supported out of the box on Debian. For that to happen, the most important thing would be to have some mainline kernel support for this board (supporting multiplatform!) so it can be built as part of Debian's armmp kernel flavour. And ideally, having the firmware load a bootloader (such as u-boot) rather than a kernel directly, to allow for a much more flexible boot sequence and support for using an initramfs (u-boot has some support for the original Raspberry Pi, so adding Raspberry Pi 2 support should hopefully not be too tricky).

by Sjoerd Simons at February 03, 2015 12:50 PM

January 29, 2015

Philip Withnall

DX hackfest 2015: day 5

Day 5, and the DX and docs hackfest in Collabora HQ, Cambridge has drawn to a close. It’s been great to have everyone here, and there have been a lot of in-depth discussions over the last few days about the details of app sandboxing, runtimes, Builder integration with various new services, the development of an IDE abstraction layer, approaches for making build systems accessible to Builder, lots of new things to statically analyse, and some fairly fundamental additions to GLib in the form of G_DECLARE_[FINAL|DERIVABLE]_TYPE and general-purpose reference counted memory areas. Whew! We even had a fleeting visit by Richard Hughes to discuss packaging issues for apps.

For all the details, see the blogs by Ryan, Cosimo, Ryan again, Alberto, Christian and Emmanuele.

I can’t do justice to the work of the docs team, who put in consistent, solid effort throughout the hackfest. See the blogs by Petr, Bastian, Kat and Jim for all the details. They even left me with a seemingly endless supply of Mallard balls to throw around the office!

Dave and I have spent a little while working on further deprecating gnome-common. More details to come once the migration guide is finished.

Thank you to Collabora for hosting, Codethink for sponsoring a dinner, and Endless, Codethink (again), Canonical and Red Hat for letting people attend; and thank you to the GNOME Foundation for sponsoring some of the attendees. It would not be a hackfest without the hackers!

Collabora, Codethink, Endless Mobile, Red Hat and Canonical logos

by Philip Withnall at January 29, 2015 07:27 PM

January 26, 2015

Philip Withnall

DX hackfest 2015: day 1

It’s a sunny Sunday here in Cambridge, UK, and GNOMErs have been arriving from far and wide for the first day of the 2015 developer experience hackfest. This is a week-long event, co-hosted with the winter docs hackfest (which Kat has promised to blog about!) in the Collabora offices.

Today was a bit of a slow start, since people were still arriving throughout the day. Regardless, there have been various discussions, with Ryan, Emmanuele and Christian discussing performance improvements in GLib, Christian and Allan plotting various different approaches to new UI in Builder, Cosimo and Carlos silently plugging away at GTK+, and Emmanuele muttering something about GProperty now and then.

Tomorrow, I hope we can flesh out some of these initial discussions a bit more and get some roadmapping down for GLib development for the next year, amongst other things. I am certain that Builder will feature heavily in discussions too, and apps and sandboxing, now that Alex has arrived.

I’ve spent a little time finishing off and releasing Walbottle, a small library and set of utilities I’ve been working on to implement JSON Schema, which is the equivalent of XML Schema or RELAX-NG, but for JSON files. It allows you to validate JSON instances against a schema, to validate schemas themselves and, unusually, to automatically generate parser unit tests from a schema. That way, you can automatically test json-glib–based JsonReader/JsonParser code, just by passing the JSON schema to Walbottle’s json-schema-generate utility.

It’s still a young project, but should be complete enough to be useful in testing JSON code. Please let me know of any bugs or missing features!

Tomorrow, I plan to dive back into static analysis of GObject code with Tartan.

by Philip Withnall at January 26, 2015 12:12 AM

December 23, 2014

Philip Withnall

Automatically Valgrinding code with AX_VALGRIND_CHECK

tl;dr: For `make check` with Valgrind support, add AX_VALGRIND_CHECK to configure.ac and @VALGRIND_CHECK_RULES@ to tests/Makefile.am. Use memcheck, helgrind, drd and sgcheck.

Once you’ve written a unit test suite with near-100% coverage, you can validate software with a lot more confidence than before, since any effort you put into validation is working across the whole code base.1 It’s quite common to check for memory leaks with Valgrind’s memcheck tool — but Valgrind has several additional tools which are useful in production: helgrind, drd and sgcheck.2 All of these can be run over unit tests.

How can you run a test suite under these tools? The normal method is some kind of hard-to-remember rune involving libtool, and either something about TESTS_ENVIRONMENT or some kind of LOG_COMPILER (but why would I want to compile my test logs?). For that reason, I’ve written an autoconf macro which abstracts it all a bit: AX_VALGRIND_CHECK. It adds a check-valgrind target to your makefile which handles running the `make check` tests under all four Valgrind tools of interest. Various ancillary variables (such as VALGRIND_SUPPRESSIONS_FILES or VALGRIND_FLAGS) allow the Valgrind options to be changed. `make check-valgrind` will return an error exit code if any problems are found, with the idea that the target can be used for automated testing and continuous integration.

To use it, either add a dependency on autoconf-archive to your project, or copy the ax_valgrind_check.m4 macro in-tree (and add it to EXTRA_DIST), then follow the instructions at the top of the file for adding it to configure.ac and tests/Makefile.am.
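Concretely, the wiring from the tl;dr ends up looking something like this (the suppressions file is optional, and its name here is just an example). In configure.ac:

AX_VALGRIND_CHECK

And in tests/Makefile.am:

@VALGRIND_CHECK_RULES@
VALGRIND_SUPPRESSIONS_FILES = my-project.supp
EXTRA_DIST = my-project.supp

After that, `make check-valgrind` runs the whole `make check` suite under the four tools.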

With this macro, I’ve tried to bring together existing makefile snippets which exist in various projects to produce one macro which can be reused everywhere. Of course, I’ve probably missed something – some feature or specific use case which someone has – so if anyone has any suggestions for improvement, please let me know or submit a patch to the autoconf-archive! For example, at the moment the macro requires automake’s parallel test harness, and does not support older versions of automake.

Thanks to my employer, Collabora, for enabling me to work on (hopefully useful) stuff like this!


  1. Normal caveats about the combinatorial complexity of dynamic testing apply. 

  2. Although currently the threading tools may not work with GLib threads, due to its use of futexes rather than pthreads

by Philip Withnall at December 23, 2014 07:49 PM

December 19, 2014

Philip Withnall

Use g_set_object() to simplify (and safetyify) GObject property setters

tl;dr: Use g_set_object() to conveniently update owned object pointers in property setters (and elsewhere); see the bottom of the post for an example.

A little while ago, GLib gained a useful function called g_clear_object(), which clears an owned object pointer (a pointer which owns a reference to a GObject). GLib also just gained a function called g_set_object(), which works similarly but can either clear an object pointer or update it to point to a new object.

Why is this valuable? It saves a few lines of code each time an object pointer is updated, which isn’t much in itself. However, one thing it gets right is the order of reference counting operations, which is a common mistake in property setters. Instead of:

/* This is bad code. */
if (object_ptr != NULL)
    g_object_unref (object_ptr);
if (new_object != NULL)
    g_object_ref (new_object);
object_ptr = new_object;

you should always do:

/* This is better code. */
if (new_object != NULL)
    g_object_ref (new_object);
if (object_ptr != NULL)
    g_object_unref (object_ptr);
object_ptr = new_object;

because otherwise, if (new_object == object_ptr) (or if the objects have some other ownership relationship) and the object only has one reference left, object_ptr will end up pointing to a finalised GObject (and g_object_ref() will be called on a finalised GObject too, which it really won’t like).

So how does g_set_object() help? We can now do:

g_set_object (&object_ptr, new_object);

which takes care of all the reference counting, and allows new_object to be NULL. &object_ptr must not be NULL. If you’re worried about performance, never fear. g_set_object() is a static inline function, so shouldn’t adversely affect your code size.

Even better, the return value of g_set_object() indicates whether the value changed, so we can conveniently base GObject::notify emissions off it:

/* This is how all GObject property setters should look in future. */
if (g_set_object (&priv->object_ptr, new_object))
    g_object_notify (self, "object-ptr");
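For context, here is a rough sketch of how that looks inside a full set_property vfunc (MyObject, PROP_CHILD and the "child" property are made-up names for illustration):

static void
my_object_set_property (GObject      *object,
                        guint         prop_id,
                        const GValue *value,
                        GParamSpec   *pspec)
{
  MyObject *self = MY_OBJECT (object);

  switch (prop_id)
    {
    case PROP_CHILD:
      /* Refs the new object, unrefs the old one, and only returns TRUE
       * (triggering the notify) if the pointer actually changed. */
      if (g_set_object (&self->priv->child, g_value_get_object (value)))
        g_object_notify (object, "child");
      break;
    default:
      G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec);
      break;
    }
}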

Hopefully this will make property setters (and other object updates) simpler in new code.

by Philip Withnall at December 19, 2014 10:27 AM

December 18, 2014

Guillaume Desmottes

Kernel hacking workshop

As part of our "community" program at Collabora, I've had the chance to attend to a workshop on kernel hacking at UrLab (the ULB hackerspace). I never touched any part of the kernel and always saw it as a scary thing for hardcore hackers wearing huge beards, so this was a great opportunity to demystify the beast.

We learned how to create and build modules, how to interact with userspace using the /sys pseudo-filesystem, and some simple tasks with the kernel's internal library (memory management, linked lists, etc.). The second part was about the old-school /dev system and how to implement a character device.
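To give an idea of the module part, the classic minimal example is only a handful of lines (a generic sketch, not one of the workshop's actual exercises); build it with a one-line Kbuild makefile (obj-m += hello.o) and load/unload it with insmod and rmmod:

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

static int __init hello_init(void)
{
	/* Runs when the module is loaded. */
	pr_info("hello: module loaded\n");
	return 0;
}

static void __exit hello_exit(void)
{
	/* Runs when the module is removed. */
	pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal hello-world module");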

I also discovered lxr.free-electrons.com which is a great tool for browsing and finding your way through a huge code base. It's definitely the kind of tool I'd consider re-using for other projects.

So, a very cool experience. I'm still not seeing myself submitting kernel patches any time soon, but I may consider trying to implement a simple driver or something if I ever need to. Thanks a lot to UrLab for hosting the event, to Collabora for letting me attend and of course to Hastake, who did a great job explaining all this and providing fun exercises (I had to reboot only 3 times! But yeah, next time I'll use a VM :) )

Club Mate, kernel hacking and bulgur

by Guillaume Desmottes at December 18, 2014 04:10 PM

Jeremy Whiting

kdesrc-build is a very useful tool, here's why

I've been thinking for some time about writing a post about my favorite tool for building, rebuilding, testing, fixing random parts of kde software and how I use it (many times a day, depending on the situation).

For anyone that doesn't know, kdesrc-build is a script written in perl; it lives in extragear/utils/kdesrc-build in the kde project hierarchy and can be cloned from kde:kdesrc-build if you've got your ~/.gitconfig as follows (if you don't, go add it now, I'll wait):


[url "git://anongit.kde.org/"]
       insteadOf = kde:
[url "git@git.kde.org:"]
       pushInsteadOf = kde:
kdesrc-build is very useful in that running it with no arguments will build all of your kde stack. This includes all the frameworks (including Qt if you want it to), all library dependencies that come from git.kde.org and all applications. To start using it, just clone it, build it (mkdir build, cd build, cmake ../, make, make install, or sudo make install if you aren't the owner of /usr/local yet) and you can run kdesrc-build from any path your terminal happens to be in. The one thing needed is a .kdesrc-buildrc file to tell it what you want to build, where you want it installed, which build options you want, etc. This is pretty straightforward though, and most of the kde stack is in include files you can add from your .kdesrc-buildrc itself. Mine looks like this:

# Adjust all these settings at will

global


 qtdir /usr
 # qtdir /home/jeremy/devel/kde/src/qt5bulid/qtbase
 source-dir /home/jeremy/devel/kde/src
 build-dir /home/jeremy/devel/kde/build
 kdedir /usr/local

 git-repository-base kde-projects kde:

 cxxflags -pipe -DQT_STRICT_ITERATORS -DQURL_NO_CAST_FROM_STRING -DQT_NO_HTTP -DQT_NO_FTP -Wformat -Werror=return-type -Wno-variadic-macros -Wlogical-op
 # WARNING: opensuse users need -DLIB_SUFFIX=64 here, as long as FindKDE4Internal.cmake is used
 #          if you're using a distro without "lib64", remove the option.
 # cmake-options -DKDE4_BUILD_TESTS=TRUE -DLIB_SUFFIX=64
 cmake-options -DBUILD_TESTING=TRUE -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_LIBDIR=lib

 make-options -j8
 #install-session-driver true
 branch-group kf5-qt5

end global

include devel/kde/src/extragear/utils/kdesrc-build/kf5-qt5-build-include

I go back and forth sometimes between distro packaged qt (in /usr) and my own built qt from git (in the other path) so I uncomment the one I want to use in those first few lines.

It's pretty simple and short, set 9 variables, include the kf5-qt5-build-include file and we're good to go. So for me, kdesrc-build with no arguments builds and installs many different kde applications, with their sources nicely organized under ~/devel/kde/src and their build folders easy to delete if needed in ~/devel/kde/build and installs into /usr/local where I have my XDG_* variables set to find applications, libraries, data, default configurations, etc. Also if some part of the workspace becomes deprecated and I need to remove old libraries, .desktop files, and such I can safely (from a terminal, not within X) nuke /usr/local/* and rerun kdesrc-build to rebuild everything that's current.

kdesrc-build frameworks - builds all the parts of kf5 itself.
kdesrc-build kdeedu - builds all the libraries and applications that have been ported to qt5/kf5 from kdeedu.
kdesrc-build --no-src kanagram - builds kanagram with my local changes for testing before committing the next feature, also useful to test patches from reviewboard (download, patch, kdesrc-build --no-src foo to build/install, run to test).
kdesrc-build --no-src khangman - builds whatever I've got checked out in kde/kdeedu/khangman at the time (currently an almost complete gsoc student's qml ui of khangman from his branch).
kdesrc-build --refresh-build - rebuilds everything with clean build folders using the cmake options from your .kdesrc-buildrc file, this is useful if you change these options and want to test them.
kdesrc-build --refresh-build --no-src foo - rebuilds everything with a clean build and doesn't do any git updates, only tries to build what's on your local clone, this is useful when porting applications to kf5/qt5 to make sure cmake is rerun when trying a build of local changes.

A good thing to know is that errors are all logged, and you can check them simply by checking source-dir/log/latest/foo/error.log (which symlinks to cmake.log, build.log, or install.log, depending where the error was).

One more nice thing: since kdesrc-build uses kde-project-metadata it can guess some projects from their location on projects.kde.org. So even if I don't have skrooge or some other extragear application in my .kdesrc-buildrc file or it's not in the included kf5-qt5-build-include file or whatnot, kdesrc-build skrooge will guess where skrooge comes from and clone it to the proper place in the hierarchy and build it.

In summary, kdesrc-build is useful for what it was created for, building the kde stack of software with your preferences.

by Jeremy Whiting (noreply@blogger.com) at December 18, 2014 05:08 AM

December 15, 2014

Gustavo Noronha Silva

Web Engines Hackfest 2014

For the 6th year in a row, Igalia has organized a hackfest focused on web engines. The 5 years before this one were actually focused on the GTK+ port of WebKit, but the number of web engines that matter to us as Free Software developers and consultancies has grown, and so has the scope of the hackfest.

It was a very productive and exciting event. It has already been covered by Manuel Rego, Philippe Normand, Sebastian Dröge and Andy Wingo! I am sure more blog posts will pop up. We had Martin Robinson telling us about the new Servo engine that Mozilla has been developing as a proof of concept for both Rust as a language for building big, complex products and for doing layout in parallel. Andy gave us a very good summary of where JS engines are in terms of performance and features. We had talks about CSS grid layouts, TyGL – a GL-powered implementation of the 2D painting backend in WebKit, the new Wayland port, announced by Zan Dobersek, and a lot more.

With help from my colleague ChangSeok OH, I presented a description of how a team at Collabora led by Marco Barisione made the combination of WebKitGTK+ and GNOME’s web browser a pretty good experience for the Raspberry Pi. It took a not so small amount of both pragmatic limitations and hacks to get to a multi-tab browser that can play youtube videos and be quite responsive, but we were very happy with how well WebKitGTK+ worked as a base for that.

One of my main goals for the hackfest was to help drive features that were lingering in the bug tracker for WebKitGTK+. I picked up a patch that had gone through a number of iterations and rewrites: the HTML5 notifications support, and with help from Carlos Garcia, managed to finish it and land it at the last day of the hackfest! It provides new signals that can be used to authorize notifications, show and close them.

To make notifications work in the best case scenario, the only thing that the API user needs to do is handle the permission request, since we provide a default implementation for the show and close signals that uses libnotify if it is available when building WebKitGTK+. Originally our intention was to use GNotification for the default implementation of those signals in WebKitGTK+, but it turned out to be a pain to use for our purposes.
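For illustration, handling that permission request looks roughly like this (a sketch against the API as it landed; here every page is simply allowed to show notifications):

static gboolean
permission_request_cb (WebKitWebView           *web_view,
                       WebKitPermissionRequest *request,
                       gpointer                 user_data)
{
  /* Only handle the notification permission requests added by this work;
   * let WebKit deal with the other kinds (geolocation, user media, ...). */
  if (!WEBKIT_IS_NOTIFICATION_PERMISSION_REQUEST (request))
    return FALSE;

  webkit_permission_request_allow (request);
  return TRUE;
}

/* ... when setting up the view ... */
g_signal_connect (web_view, "permission-request",
                  G_CALLBACK (permission_request_cb), NULL);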

GNotification is tied to GApplication. This allows for some interesting features, like notifications being persistent and able to reactivate the application, but those make no sense in our current use case, although that may change once service workers become a thing. It can also be a bit problematic given we are a library and thus have no GApplication of our own. That was easily overcome by using the default GApplication of the process for notifications, though.

The show stopper for us using GNotification was the way GNOME Shell currently deals with notifications sent using this mechanism. It will look for a .desktop file named after the application ID used to initialize the GApplication instance and reject the notification if it cannot find that. Besides making this a pain to test (our test browser would need a .desktop file to be installed), it would not work for our main API user! The application ID used for all Web instances is org.gnome.Epiphany at the moment, and that is not the same as any of the desktop files used either by the main browser or by the web apps created with it.

For the future we will probably move Epiphany towards this new era, and all users of the WebKitGTK+ API as well, but the strictness of GNOME Shell would hurt the usefulness of our default implementation right now, so we decided to stick to libnotify for the time being.

Other than that, I managed to review a bunch of patches during the hackfest, and took part in many interesting discussions regarding the next steps for GNOME Web and the GTK+ and Wayland ports of WebKit, such as the potential introduction of a threaded compositor, which is pretty exciting. We also tried to have Bastien Nocera as a guest participant for one of our sessions, but it turns out that requires more than a notebook on top of a bench hooked up to a TV to work well. We could think of something next time ;D.

I’d like to thank Igalia for organizing and sponsoring the event, Collabora for sponsoring and sending ChangSeok and myself over to Spain from far away Brazil and South Korea, and Adobe for also sponsoring the event! Hope to see you all next year!

Web Engines Hackfest 2014 sponsors: Adobe, Collabora and Igalia

by kov at December 15, 2014 11:20 PM

December 12, 2014

Javier Martinez Canillas

Collabora contributions to the Linux kernel 3.18

Linux 3.18 was released this week and like in previous kernel releases, it contains contributions made by Collabora engineers as a part of the projects involving the Linux kernel.

In total 45 patches were contributed to the 3.18 release. These were for:

  • Cleanups for the Intel i915 DRM driver
  • Various cleanups for the max77686 clock, rtc and regulator drivers.
  • Adding max77802 clock, rtc and regulator drivers.
  • Fixing regmap DT endianess parsing logic.
  • Improving the power model in the Exynos5 Peach Pit and Pi Chromebooks DT.
  • Using the regulator_get_voltage() function to get the mmc OCR mask.
  • Adding max77802 PMIC, ISL29018 sensor and atmel touchpad for Peach boards DT.
  • Setting the correct clock rate for i2c7 in Exynos5 Peach Pit and Pi DT.
  • Enable atmel touchpad, cgroups and sbs battery in exynos defconfig.
  • Fixing variable initialization for different regulator drivers.
  • Fixing MFC v5 support in the s5p-mfc driver.
  • Making module autoloading work for the i2c cros-ec-tunnel and cros_ec_keyb drivers.
  • Explicitly configure USB dual role mode as host for Exynos boards.
  • Adding a PM_QOS_MEMORY_BANDWIDTH pm_qos class.
  • Enabling gcov-based kernel profiling for ARM

Following is the complete list of patches merged in this kernel release:


by Javier Martinez Canillas at December 12, 2014 11:51 PM

December 09, 2014

Philip Withnall

A checklist for writing pkg-config files

tl;dr: Use AX_PKG_CHECK_MODULES to split public/private dependencies; use AC_CONFIG_FILES to magically include the API version in the .pc file name.

A few tips for creating a pkg-config file which you will never need to think about maintaining again — because one of the most common problems with pkg-config files is that their dependency lists are years out of date compared to the dependencies checked for in configure.ac. See lower down for some example automake snippets.

  • Include the project’s major API version1 in the pkg-config file name. e.g. libfoo-1.pc rather than libfoo.pc. This will allow parallel installation of two API-incompatible versions of the library if it becomes necessary in future.
  • Split private and public dependencies between Requires and Requires.private. This eliminates over-linking when dynamically linking against the project, since in that case the private dependencies are not needed. This is easily done using the AX_PKG_CHECK_MODULES macro (and perhaps using an upstream macro in future — see pkg-config bug #87154). A dependency is public when its symbols are exposed in public headers installed by your project; it is private otherwise.
  • Include useful ancillary variables, such as the paths to any utilities, directories or daemons which ship with the project. For example, glib-2.0.pc has variables giving the paths for its utilities: glib-genmarshal, gobject-query and glib-mkenums. libosinfo-1.0.pc has variables for its database directories. Ensure the variables use a variable form of ${prefix}, allowing it to be overridden when invoking pkg-config using pkg-config --define-variable=prefix=/some/other/prefix. This allows use of libraries installed in one (read only) prefix from binaries in another, while installing ancillary files (e.g. D-Bus service files) to the second prefix.
  • Substitute in the Name and Version using @PACKAGE_NAME@ and @PACKAGE_VERSION@ so they don’t fall out of sync.
  • Place the .pc.in template in the source code subdirectory for the library it’s for — so if your project produces multiple libraries (or might do in future), the .pc.in files don’t get mixed up at the top level.

Given all those suggestions, here’s a template libmy-project/my-project.pc.in file (updated to incorporate suggestions by Dan Nicholson):

prefix=@prefix@
exec_prefix=@exec_prefix@
libdir=@libdir@
includedir=@includedir@

my_project_utility=my-project-utility-binary-name
my_project_db_dir=@sysconfdir@/my-project/db

Name: @PACKAGE_NAME@
Description: Some brief but informative description
Version: @PACKAGE_VERSION@
Libs: -L${libdir} -lmy-project-@API_VERSION@
Cflags: -I${includedir}/my-project-@API_VERSION@
Requires: @AX_PACKAGE_REQUIRES@
Requires.private: @AX_PACKAGE_REQUIRES_PRIVATE@

And here are a few snippets from a template configure.ac:

# Release version
m4_define([package_version_major],[1])
m4_define([package_version_minor],[2])
m4_define([package_version_micro],[3])

# API version
m4_define([api_version],[1])

AC_INIT([my-project],
        [package_version_major.package_version_minor.package_version_micro],
        …)

# Dependencies
PKG_PROG_PKG_CONFIG
PKG_INSTALLDIR

glib_reqs=2.40
gio_reqs=2.42
gthread_reqs=2.40
nice_reqs=0.1.6

# The first list on each line is public; the second is private.
# The AX_PKG_CHECK_MODULES macro substitutes AX_PACKAGE_REQUIRES and
# AX_PACKAGE_REQUIRES_PRIVATE.
AX_PKG_CHECK_MODULES([GLIB],
                     [glib-2.0 >= $glib_reqs gio-2.0 >= $gio_reqs],
                     [gthread-2.0 >= $gthread_reqs])
AX_PKG_CHECK_MODULES([NICE],
                     [nice >= $nice_reqs],
                     [])

AC_SUBST([PACKAGE_VERSION_MAJOR],package_version_major)
AC_SUBST([PACKAGE_VERSION_MINOR],package_version_minor)
AC_SUBST([PACKAGE_VERSION_MICRO],package_version_micro)
AC_SUBST([API_VERSION],api_version)

# Output files
# Rename the template .pc file to include the API version on configure
AC_CONFIG_FILES([
libmy-project/my-project-$API_VERSION.pc:libmy-project/my-project.pc.in
…
],[],
[API_VERSION='$API_VERSION'])
AC_OUTPUT

And finally, the top-level Makefile.am:

# Install the pkg-config file; the directory is set using
# PKG_INSTALLDIR in configure.ac.
pkgconfig_DATA = libmy-project/my-project-$(API_VERSION).pc

Once that’s all built, you’ll end up with an installed my-project-1.pc file containing the following (assuming a prefix of /usr; note that by default autoconf substitutes in references to variables rather than the values themselves, so pkg-config can continue to be used with --define-variable to override the prefix):

prefix=/usr
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

my_project_utility=my-project-utility-binary-name
my_project_db_dir=${prefix}/etc/my-project/db

Name: my-project
Description: Some brief but informative description
Version: 1.2.3
Libs: -L${libdir} -lmy-project-1
Cflags: -I${includedir}/my-project-1
Requires: glib-2.0 >= 2.40 gio-2.0 >= 2.42 nice >= 0.1.6
Requires.private: gthread-2.0 >= 2.40

All code samples in this post are released into the public domain.


  1. Assuming this is the number which will change if backwards-incompatible API/ABI changes are made. 

by Philip Withnall at December 09, 2014 01:41 PM

December 08, 2014

Nick Richards

Google Maps 2014 Redesign

This talk from Google IO 2014 is about a redesign, that of Google Maps - but to me it feels like the process behind a really good design for any sort of complex dynamic system. Lovely to see the working being shown and some really interesting examples of the 'double the hypothesis to get a reaction to see if you're moving in the correct direction' technique as well as the wide variety of locations tested and visualised.

If you liked that then you'll probably love going slightly higher level into a detailed breakdown of the similarities and differences between the Apple and Google Maps Apps on iOS.

by Nick Richards at December 08, 2014 05:13 PM

December 03, 2014

Philip Withnall

AM_INCLUDES doesn’t exist

tl;dr: Don’t use AM_INCLUDES in Makefiles. It does nothing. INCLUDES and AM_CPPFLAGS should be avoided in favour of my_program_name_CPPFLAGS.

Once upon a time, there was an automake variable called INCLUDES. It was used to pass flags to the preprocessor, such as -I flags to specify include paths. All was good.

Then the gentle leaders of automake decided that INCLUDES was not a very generic name, so they deprecated it in favour of the *_CPPFLAGS variables, like the target-specific my_program_name_CPPFLAGS. For those who wanted to specify preprocessor flags for all targets in their Makefile, the AM_CPPFLAGS variable was provided. Due to their kind and wise oversight, the transition between these variables (in the time of the coming of automake 1.12) was peaceful, and tranquillity continued to reign in the pastures of automakery.

And yet, some lands were troubled. Some of the citizens of automake had developed a confusion. They had taken the INCLUDES incantations of old, and mixed them with the AM_ prefix of new to create an AM_INCLUDES monster. ‘This is surely better’, they reasoned — ‘we have global preprocessor flags, but allow it to be overridden by the user due to our use of the AM_ prefix’. These citizens went ahead and used their AM_INCLUDES variable, but were confused when it failed to affect their automagic. They thought the runes were not in their favour, so copied its value into AM_CFLAGS as well, just in case.


These citizens are wrong. AM_INCLUDES does not exist. Do not use it. automake does not recognise it. You are wasting your time. It has no effect.

Use per-target CPPFLAGS (e.g. my_program_name_CPPFLAGS) to specify preprocessor flags. Use per-target CFLAGS to specify C-compiler-only flags. Do not use AM_CPPFLAGS unless you’re sure all flags in it are needed for all targets in the Makefile. Do not use INCLUDES unless you want a big deprecation warning from automake ("warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS' (or '*_CPPFLAGS')"); it is entirely equivalent to AM_CPPFLAGS. Do not use AM_INCLUDES at all.
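For example, a Makefile.am using per-target flags might look like this (my_program, its sources and the flags shown are just placeholders):

bin_PROGRAMS = my_program

my_program_SOURCES = main.c util.c
# Preprocessor flags (include paths, -D defines) for this target only.
my_program_CPPFLAGS = -I$(top_srcdir)/include -DMY_PROGRAM_COMPILATION
# C-compiler-only flags for this target.
my_program_CFLAGS = -Wall -Wextra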

For the curious, passing -Wall to automake will enable a bunch of warnings which can improve your Makefiles. You can do that using the AUTOMAKE_OPTIONS variable in Makefile.am, or the AM_INIT_AUTOMAKE macro in configure.ac.


And the citizens of automake lived happily ever after.

by Philip Withnall at December 03, 2014 09:13 AM

November 27, 2014

Jeremy Whiting

Gardening Recipes

You thought this would have something to do with working outside and/or cooking something. You were wrong.

As Albert blogged previously the KDE gardening team's love project this time is KRecipes.
The mailing list has been moved.
The website content has been added to userbase.
The 2.0 release is on upload.kde.org, soon to be moved into place (a new place since download.kde.org hasn't done a krecipes release before).

For anyone that would like a relatively small way to quickly help KRecipes out: it could use some patches on reviewboard (group added, sending to the new mailing list already) to port away from Qt3support classes. Some other ideas off the top of my head also:

1. Make it use kunitconversion to convert metric units to imperial units for those of us stuck in countries that use imperial measuring systems.
2. Freshen up the ui a bit so it looks less like a database viewer and more like a recipe viewer/editor.

by Jeremy Whiting (noreply@blogger.com) at November 27, 2014 02:02 AM

November 20, 2014

Neil McGovern

Barbie the Debian Developer

Some people may have seen recently that the Barbie series has a rather sexist book out about Barbie the Computer Engineer. Fortunately, there’s a way to improve this by making your own version.

Thus, I made a short version about Barbie the Debian Developer and init system packager.

(For those who don’t know me, this is satirical. Any resemblance to people is purely coincidental.)

Edit: added text in alt tags. Also, hai reddit!

One day, Debian Developer Barbie decided to package and upload a new init system to Debian, called 'systemd'. I hope everyone else will find it useful, she thought.
Oh no says Skipper! You'll never take my init system away from me! It's horrendous and Not The Unix Way! Oh dear said Barbie, What have I let myself in to?
Skipper was most upset, and decided that this would not do. It's off to the technical committee with this. They'll surely see sense.
Oh no! What's this? The internet decided that the Technical Committee needed to also know everyone's individual views! Bad Internet!
There was much discussion and consideration. Opinions were reviewed, rows were had, and months passed. Eventually, a decision was agreed upon.
Barbie was successful! The will of the Technical Committee was that systemd would be the default! But wait...
Skipper still wasn't happy. We need to make sure this never affects me! I'm going to call for a General Resolution!
And so, Ms Devotee was called in to look at the various options. She said that the arguments must stop, and we should all accept the result of the general resolution.
The numbers turned and the vote was out. We should simply be most excellent to each other said Ms Devotee. I'm not going to tell you what you should or should not do.
Over the next year, the project was able to heal itself and eventually Barbie and Skipper decided to make amends. Now let's work at making Debian better!

by Neil McGovern at November 20, 2014 08:00 PM

November 19, 2014

Simon McVittie

still aiming to be the universal operating system

Debian's latest round of angry mailing list threads has been about some combination of init systems, future direction and project governance. The details aren't particularly important here, and pretty much everything worthwhile in favour of or against each position has already been said several times, but I think this bit is important enough that it bears repeating: the reason I voted "we didn't need this General Resolution" ahead of the other options is that I hope we can continue to use our normal technical and decision-making processes to make Debian 8 the best possible OS distribution for everyone. That includes people who like systemd, people who dislike systemd, people who don't care either way and just want the OS to work, and everyone in between those extremes.

I think that works best when we do things, and least well when a lot of time and energy get diverted into talking about doing things. I've been trying to do my small part of the former by fixing some release-critical bugs so we can release Debian 8. Please join in, and remember to write good unblock requests so our hard-working release team can get through them in a finite time. I realise not everyone will agree with my idea of which bugs, which features and which combinations of packages are highest-priority; that's fine, there are plenty of bugs to go round!

Regarding init systems specifically, Debian 'jessie' currently works with at least systemd-sysv or sysvinit-core as pid 1 (probably also Upstart, but I haven't tried that) and I'm confident that Debian developers won't let either of those regress before it's released as Debian 8.

I expect the freeze for Debian 'stretch' (presumably Debian 9) to be a couple of years away, so it seems premature to say anything about what will or won't be supported there; that depends on what upstream developers do, and what Debian developers do, between now and then. What I can predict is that the components that get useful bug reports, active maintenance, thorough testing, careful review, and similar help from contributors will work better than the things that don't; so if you like a component and want it to be supported in Debian, you can help by, well, supporting it.


PS. If you want the Debian 8 installer to leave you running sysvinit as pid 1 after the first reboot, here's a suitable incantation to add to the kernel command-line in the installer's bootloader. This one certainly worked when KiBi asked for testing a few days ago:

preseed/late_command="in-target apt-get install -y sysvinit-core"

I think that corresponds to this line in a preseeding file, if you use those:

d-i preseed/late_command string in-target apt-get install -y sysvinit-core

A similar apt-get command, without the in-target prefix, should work on an installed system that already has systemd-sysv. Depending on other installed software, you might need to add systemd-shim to the command line too, but when I tried it, apt-get was able to work that out for itself.

If you use aptitude instead of apt-get, double-check what it will do before saying "yes" to this particular switchover: its heuristic for resolving conflicts seems to be rather more trigger-happy about removing packages than the one in apt-get.

November 19, 2014 12:00 AM

November 16, 2014

Tollef Fog Heen

Resigning as a Debian systemd maintainer

Apparently, people care when you, as a privileged person (white, male, long-time Debian Developer), throw in the towel because the amount of crap thrown your way just becomes too much. I guess that's good, both because it gives me a soapbox for a short while, and because if enough people talk about how poisonous the well that is Debian has become, we can fix it.

This morning, I resigned as a member of the systemd maintainer team. I then proceeded to leave the relevant IRC channels and announced this on twitter. The responses I've gotten have almost all been heartwarming. People have generally been offering hugs, saying thanks for the work put into systemd in Debian and so on. I've greatly appreciated those (and I'd been getting those before I resigned too, so this isn't just a response to that). I feel bad about leaving the rest of the team; they're a great bunch: competent, caring, funny, wonderful people. On the other hand, at some point I had to draw a line and say "no further".

Debian and its various maintainer teams are a bunch of tribes (with possibly Debian itself being a supertribe). Unlike many other situations, you can be part of multiple tribes. I'm still a member of the DSA tribe, for instance. Leaving pkg-systemd means leaving one of my tribes. That hurts. It hurts even more because it feels like a forced exit, rather than losing interest or being distracted by other shiny things for long enough that you no longer really feel like part of the tribe. That happened to me with debian-installer. It was my baby for a while (with a then quite small team), then a bunch of real life things interfered and other people picked it up and ran with it and made it greater and more fantastic than before. I kinda lost touch, and while it's still dear to me, I no longer identify as part of the debian-boot tribe.

Now, how did I, standing stout and tall, get forced out of my tribe? I've been a DD for almost 14 years; I should be able to weather any storm, shouldn't I? It turns out that no, the mountain does get worn down by the rain. It's not a single hurtful comment here and there. There's a constant drumbeat about this all being some sort of conspiracy, and there are sometimes flare-ups where people wish those involved in systemd would be run over by a bus, or simply accusations of incompetence.

Our code of conduct says, "assume good faith". If you ever find yourself not doing that, step back, breathe. See if there's a reasonable explanation for why somebody is saying something or behaving in a way that doesn't make sense to you. It might be as simple as your native tongue being English and theirs being something else.

If you do genuinely disagree with somebody (something which is entirely fine), try not to escalate, even if the stakes are high. Examples from the last year include talking about this as a war and talking about "increasingly bitter rear-guard battles". By using and accepting this terminology, we, as a project, poison ourselves. Sam Hartman puts this better than me:

I'm hoping that we can all take a few minutes to gain empathy for those who disagree with us. Then I'm hoping we can use that understanding to reassure them that they are valued and respected and their concerns considered even when we end up strongly disagreeing with them or valuing different things.

I'd be lying if I said I didn't ever feel the urge to demonise my opponents in discussions. That they're worse, as people, than I am. However, it is imperative to never give in to this, since doing that will diminish us as humans and make the entire project poorer. Civil disagreements with reasonable discussions lead to better technical outcomes, happier humans and a healthier project.

November 16, 2014 10:55 PM

November 12, 2014

Jeremy Whiting

KDE Fundraiser 2014

In case you didn't hear, KDE is running a fundraiser which helps fund sprints and other expenses of your favorite desktop/applications/games brought to you by the KDE community. Plus there are cool konqi postcards!

Ok, back to your regularly scheduled blog reading, have a nice day.

by Jeremy Whiting (noreply@blogger.com) at November 12, 2014 07:00 AM

November 10, 2014

Gustavo Noronha Silva

Yay, the left won! Or did it?

Originally published on politi.kov

I have been asked by a bunch of friends from outside of Brazil for my opinion regarding the recent elections we had in Brazil, and it is a bit complicated to explain it without some background, so I decided to write this piece providing a bit of history so that people can understand my opinion.

The elections this year were a rematch of our traditional polarization between the workers party (PT) and the social democracy party (PSDB), which has been going on since 1994. PT and PSDB used to be allies. In the 80s, when the dictatorship dropped the law that forbade more than 2 parties, the opposition party, MDB, began breaking up in several smaller ones.

PSDB was founded by politicians and intellectuals who were inspired by Europe’s social democracy and political systems. Parliamentarism, for instance, is one of the historical causes of the party. The workers party had a more grassroots origin, with union leaders, Marxist intellectuals and Marxist-inspired Catholic priests being the main founders. They drew their inspiration from the USSR and Cuba, and were very close to social movements.

Lula (PT) and FHC (PSDB) campaigning together in 1981, by Clóvis Cranchi Sobrinho

Some people have celebrated the reelection of Dilma Rousseff as a victory of the left against the right. In my opinion that view is wrong for several reasons. First, because I disagree that PSDB and Aécio Neves in particular are right-wing, both in terms of economics and social/moral issues. Second, because I believe Dilma’s first government has taken a quite severe turn to the right in several topics that matter a lot to me. Since comparisons with PSDB’s government during the 90s have been one of the main strategies of the campaign this year, I’ll argue why I think it was actually a pretty good government with a lot of left in it.

Unlike what happens in most other places, Brazil does not really have an actual right-wing party, economics-wise. Although we might see the birth of a couple in the near future, no current party is really against public health, education and social security being provided by the state as rights, or wants to decrease state size and lower taxes significantly. It should come as no surprise that even though it has undergone a lot of liberal reforms over the last 20 years, Brazil is still a very closed country, with very high import tariffs and a huge presence of the state in the economy. There is a certain consensus about all of that, with disagreements being essentially on implementation details, not goals.

On the other hand, and contrary to popular belief, when it comes to social and moral issues we are a very conservative people. Ironically, the two parties which have been in power over the last 20 years are quite progressive, being historically proponents of diversity, minorities rights, reproductive rights. They have had to compromise on those causes to become viable alternatives, given the conservative nature of the majority of the voters.

Despite their different origins and beliefs, both parties share socialist inclinations and were allies from the outset. That changed in 1992, when president Collor, who had been elected in a runoff against Lula (whom PSDB supported), was impeached by Congress for corruption. With no formal political support and a chaotic situation in his hands, Itamar Franco, the vice president, called for a “national union” government to go through the last two years of his term. PSDB answered the call, but the workers party decided against being part of the government.

Fernando Henrique Cardoso, a sociologist who was one of the leaders of PSDB, was chosen to lead the Foreign Relations Ministry, but a few months later got nominated to the Economy Ministry. At the time, Brazil lived under hyperinflation of close to 1000% a year, and several stabilization plans had been attempted. Economy Ministers did not last very long in office at the time. FHC gathered a team of economists and sponsored their stabilization plan, which turned out to be highly successful: the Plano Real (“Real Plan”). In addition to introducing a new currency, something that was becoming pretty common to Brazilians by then, it also attacked the structural causes of inflation.

Lula was counting on the failure of the Plano Real when he ran against FHC in 1994, but the plan succeeded, giving FHC two terms as president. During those two terms, FHC introduced several institutional changes that made Brazil a saner country. In addition to the hyperinflation, Brazil had lived through a debt crisis for decades and was still in default. FHC’s team renegotiated the debts, reopened lines of credit, but most importantly, introduced reforms that made the Brazilian finances and financial system credible.

The problem was not even that Brazil had a fiscal deficit; it just did not have any control whatsoever over money supply and budget. Banks, regardless of whether they were private or public, had very little regulation and took advantage of the hyperinflation to hide monstrous holes in their balance sheets. When inflation was gone and regulation became more strict, those became apparent, and it was pretty clear that the system would collapse if nothing was done.

Some people like to say that FHC was a president who ruled for the rich and didn’t care about the poor. I think the way the potential collapse of the banking system was handled is a great counter-example of that. The government passed laws that made the owners of the banks responsible for the financial problems, regardless of whether caused by mismanagement or fraud. If a bank went under, the central bank intervened and added enough money to protect the deposits, but that money was a loan that had to be repaid by the owners of the bank, and the owners’ properties were added as collateral to the loan. As a Brazilian journalist once said, the people did not risk losing their deposits; the bankers did risk losing the banks, though. Today, we have a separate fund, filled with money from the banks, that does what the central bank did back then when required.

Compare that to countries where the banking system was saved with taxpayer money and executives kept getting huge bonuses regardless, while owners kept their profits. It is hard to find an initiative that is more focused on the public interest against the interest of the rich people who caused the problem. This legislation, called PROER, is still in place today, and it came along with solid regulation of the banking system. It should come as no surprise that Brazil went through the financial crisis of 2008 with not a single hiccup of the banking system and no fear of bank runs. Despite having been against PROER back in the day, Lula celebrated its existence in 2008, when it was clear it was one of the reasons we would not suffer much. He even advertised it as something that should be adopted by the US and Europe.

It is also pretty common to hear that under FHC social questions were not a priority. I believe it is pretty simple to see that that was not the case, both by inspecting the growth of social spending and the improvement of social indicators for the period, such as the UN’s Human Development Index. One area in which people are particularly critical of the FHC government is investment in higher education, and they are actually quite right. Brazil has free federal universities and those did not get a lot of priority in the 90s. However, I would argue that while it is a matter of priorities, it is not one of education versus something else, but rather of what to invest in within education. The reality is basic education was the priority.

When FHC came to power, Brazil had a significant number of children who were not going to school at all. The goal was to make access to schools universal for young children, and that goal was reached. Every child has been going to school since the early 2000s, and that is a significant achievement which reaches the poorest. While the federal universities are attended essentially by the Brazilian elite, given the difficulty of passing the exams and the relative lack of quality of free public schools compared to private ones, which is still a reality to this day, investment in getting children to even go to school for the early years has a significant impact on the lives of the poorest.

It is important to remember that getting every child to go to school is also what gave birth to one of the most celebrated programs from the Lula era: Bolsa Família (“Family Allowance”) is a direct money transfer to poor families, particularly those with children, and it has been an important contribution to lowering inequality and getting people out of extreme poverty. To get the money, the families need to ensure their children are 1) attending school and 2) getting vaccinated.

That program comes from the FHC government, in which it was created with the name Bolsa Escola (“School Allowance”), in its turn inspired by a program of the same name by governor Cristovam Buarque, from PT. What Lula did, and he deserves a lot of credit for this, was to merge a series of smaller programs with Bolsa Escola, and then expand the program to ensure it got to more and more people. Interestingly, during the announcement of the program he credited the idea of doing that to a state governor from PSDB. You can see why I think these two should be allies again.

When faced with all these arguments, people will eventually say that FHC was bad because he privatized companies and used orthodox economic policies. Well, if that is what it takes, then we’ll have to take Lula down with him, because his first term was essentially a continuation of FHC’s second term: orthodox economic policies to keep inflation down, along with privatization of several state-owned companies and banks. But Lula, whom I voted for and whose government I believe was a good one, is not my subject: Dilma is.

On Lula’s second term, Dilma gained a lot of power when other major leaders of PT went down for corruption. She became second in command and started leading several programs. A big believer in developmentalism, she started pushing for a bigger role of the state in the coordination of the productive sector, with a clear focus on growing the industrial base.

One of the initiatives she sponsored was a sizable increase in the number and size of subsidized loans given out by the national development bank (BNDES). Brazil started an unofficial “national champions” program, in which the government elected a few big companies to get a huge amount of subsidized credit.

The goal was for these selected firms to get big enough to be competitive on the global market. The criteria for the choices are completely opaque, if they even exist, and include handing out millions in subsidized credit to Eike Batista, who became Brazil’s richest entrepreneur for a while, and lost pretty much everything when it became clear the oil would not be pumping out of his fields after all, sinking with them a huge amount of public funds invested by BNDES.

The way this policy was enacted, it is unclear how much it really costs in terms of public funds: the Brazilian treasury issues debt to raise capital, lends that money to BNDES at higher than market interest, and BNDES then lends it out to the big companies at a lower than market interest rate. Although it is obviously unsustainable, the problem does not yet show in the balance because the grace period for BNDES debt with the treasury is 2040. The fact that this has a cost and, perhaps more importantly, a huge opportunity cost is not clear because it is not part of the government budget. Why are we putting money into this rather than quadrupling Bolsa Família, which studies show generates 1.78 reais in GDP for every 1 real invested? Worse, why are we not even updating Bolsa Família enough to cover inflation?

When Dilma got elected in 2010, the first signs were pretty bad. She was already seen as someone who did not care much for the environment, and in her first month in power she made good on that promise by pushing to get construction of the Belo Monte Dam started as soon as possible, regardless of whether its conditionalities were satisfied. To this day there are several issues with how the building of the dam is going: the handling of the indigenous people and the small city nearby is lacking, and conditionalities are not being met.

Beyond Belo Monte, indigenous leaders are being assassinated, and deforestation in the Amazon forest increased by 122% in 2014 alone. Dilma’s answer to people who question her on these kinds of issues is essentially: “would you rather not have electric power?”

Her populist, authoritarian nature and obsession with industry are also pretty evident when it comes to her policies in the energy area as a whole. She showed up on national TV on the eve of our independence day celebration to announce a reduction in electric tariffs, mainly for industry, but also for homes. Nobody really knew how. The following week she sent a fast-track project to Congress to automatically renew the concessions of power grid operators, requiring those who accepted it to lower tariffs, instead of holding an auction, which was already necessary anyway because the concessions were up in 2015. There was no discussion with stakeholders; there was just a populist announcement and a great deal of rhetoric to paint anyone who opposed it as being against the people.

And now, everything has gone into the crapper, because that represented a breach of contract that required indemnification, and we had a pretty bad drought that made power more expensive given the need to turn on the thermal generators. Combining the costs of the thermal generation, the indemnities, and the financial fallout that the grid operators suffered, we are already at 105 billion reais and counting; nobody knows how high the cost will go. Any reduction in tariffs has long since been cancelled out. And the fact that industry has lowered production significantly ends up being good news: we would probably be under rationing already if that were not the case.

You would expect someone who fought a dictatorship to be pretty good in terms of human and civil rights. What we see in reality is a lack of respect for those things. During the World Cup, Dilma put the army on the streets and supported arbitrary behaviour by state police forces throughout the country. They jailed a bunch of demonstrators preemptively. No shit. The would-be demonstrators were kept in jail throughout the tournament under false accusations. Dilma’s Minister of Justice said several times that the case against them was solid and that the arrests were legal, but it turned out the case simply did not exist. Just this week we had a number of executions orchestrated by policemen in the state of Pará, and there has been zero reaction from the federal government.

In the oil industry, Dilma has enacted a policy of subsidizing gas prices by using a fixed price that used to be lower than international prices (it is no longer the case with the fall in international prices). That would not be a problem if Brazil were self-sufficient in oil and gas, which we are not: we had to import a significant amount of both. The implicit subsidy cost Petrobrás a huge amount of cash – the more gas it sold, the bigger the losses. This led not only to decreasing the company’s market value (it is a state-controlled, but publicly traded, company), but to reducing its capacity for investment as well.

That is more problematic than it sounds because, with our current concession model, every single oil field needs to have Petrobrás as a member of the consortium. Limiting the company’s investment capacity limits the rate at which our pre-salt oil fields can be explored, and thus the speed at which we can become self-sufficient. Chicken and egg, anyone?

To make things worse, Dilma has made the policies that lowered taxes on car production, used to foster economic activity during the crisis in 2008-2010, essentially permanent. This led to a significant increase in traffic and pollution in Brazilian cities, while at the same time increasing the pressure on Petrobrás, which had to import more and more gas. Meanwhile, Brazilian cities suffer from a severe lack of mobility infrastructure. A recent study has shown that Brazil has spent almost twice as much subsidized money on pro-car policies as on pro-mass-transit projects. Talk about good use of public funds.

One of the only remaining pieces of good news the government was still able to mention was the constant reduction in extreme poverty. Dilma was actually elected promising to eradicate extreme poverty, and changed the government’s slogan to “A rich country is a country with no poverty” (País rico é país sem pobreza). Well, it turns out all of these policies caused both inequality and extreme poverty to stop falling as of 2013. And given that the policies were actually deepened in 2014, I believe it is very likely we’ll see an increase in both when we get the data for 2014, next year.

Other than that, her policies ended up being a complete failure. Despite giving tax benefits to several sectors, investment has fallen, growth has fallen and inflation is quite high at 6.6% for the last 12 months. In terms of minorities, her government has been a severe setback, with the government going back on educational material against homophobia, saying it would not do “advertisement of sexual choice”, and going back on a decree that allowed the public health system to perform abortions in the cases allowed by the law (essentially if the woman has been raped).

Looking at Dilma’s policies, I really can’t see that much of the left, honestly. So why, you might ask, has this victory been deemed a victory of the left over the right? My explanation is the aura the workers party still manages to keep over itself. There’s a notion that whatever PT does, it will still be more to the left than PSDB, which I think is just crazy.

There is also a fair amount of idealizing Dilma just because she is Lula’s protégé. People will forgive anything, provided it is the workers party doing it. Thankfully, the number of people aligned with the left who supported the candidate from PSDB this election tells me this is changing quite rapidly. Hopefully that leads to PT having to reinvent itself, and get in touch with the left again.

by kov at November 10, 2014 11:38 PM

November 09, 2014

Jeremy Whiting

Autostart in Plasma 5

In the past few days, as I got more and more annoyed that git pushes and svn updates and commits were always asking for my ssh key password, I looked at the current status of Plasma autostart. When I set up my Plasma 5 environment I copied all the scripts that were in ~/.kde/Autostart to ~/.config/autostart, thinking they would "just work" there, but it turns out they don't. My needs are simple: I only had two .sh bash scripts in there, one for running ksshaskpass to ssh-add my ssh key, the other to launch synergys on login. Both scripts showed up in the autostart KCM, but neither was getting launched at login time.

Digging a bit into the code of kinit and klauncher, I found that when it looks at ~/.config/autostart it only looks for .desktop files. It turns out this comes from the XDG autostart spec. Ok, simple enough, so I removed my synergys.sh script and, in the autostart KCM, created a new autostart item that launches synergys for me at login time. Works like a charm.

Next I went to look at ssh-add using ksshaskpass (there's a KF5 port of ksshaskpass in kdereview now, by the way, which works pretty well and has some small improvements over the kdelibs4 version). For one thing, files in ~/.config/autostart that aren't .desktop files don't get launched, because klauncher doesn't even look at them. To fix this, the autostart KCM was modified to not show these anymore.

The other question then is where plain bash scripts that users and sysadmins used to put into ~/.kde/Autostart should go. One solution is to put them into ~/.config/plasma-workspace/env/. This is what I did here with ssh-add.sh, and that works fine. The only thing to note, though, is that these are sourced, not executed, and they are sourced before Plasma has started, so they can set environment variables and such if needed. This may not be ideal for scripts that need to talk to running services, so we may need to add something to ksmserver to launch scripts from ~/.config/ksm-autostart or something like that. There's a bug about it here https://bugs.kde.org/show_bug.cgi?id=338242 already, so don't file new bugs about this issue.

by Jeremy Whiting (noreply@blogger.com) at November 09, 2014 09:47 PM

October 31, 2014

Philip Withnall

Introduction to ICE and libnice

As part of the series of tea time talks we do within Collabora, I recently got to refresh my knowledge of STUN, TURN and ICE (the protocols for NAT traversal) and give an introductory talk on how they all fit together within the context of libnice.

Since the talk might be useful (and perhaps even interesting) to a wider audience, I’ve made it available: slides, handout and source (git). It’s under CC-BY-SA 4.0. Please leave comments if anything is unclear, incorrect, or could do with more in-depth coverage!

by Philip Withnall at October 31, 2014 10:02 AM

Recent improvements in libnice

For the past several months, Olivier Crête and I have been working on a project using libnice at Collabora, which is now coming to a close. Through the project we’ve managed to add a number of large, new features to libnice, and implement hundreds (no exaggeration) of cleanups and bug fixes. All of this work was done upstream, and is available in libnice 0.1.8, released recently! GLib has also gained a number of networking fixes, API additions and documentation improvements.

tl;dr: libnice now has a GIOStream implementation, support for scatter–gather send and receive, and more mature pseudo-TCP support — so its API should be much nicer to integrate; GLib has gained a number of fixes.

Firstly, what is libnice? It’s a GLib implementation of ICE, the standard protocol for NAT traversal. Briefly, NAT traversal is needed when two hosts want to communicate peer-to-peer in a network where there is at least one NAT translator between them, meaning that at least one of the hosts cannot directly address the other until a mapping is created in the NAT translator. This is a very common situation (due to the shortage of IPv4 addresses, and the consequence that most home routers act as NAT translators) and affects virtually all peer-to-peer communications. It’s well covered in the literature, and the rest of this post will assume a basic understanding of NAT and ICE, a topic about which I recently gave a talk.

Conceptually, libnice exists just to create a reliable (TCP-like) or unreliable (UDP-like) socket which connects your host with a remote one in a manner that traverses any intervening NATs. At its core, it is effectively an implementation of send(), recv(), and some ancillary functions to negotiate the ICE stream at startup time.
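
To make that concrete, here's a minimal sketch of my own showing how an agent is typically brought up with the C API; the STUN server address is a placeholder, and candidate exchange (signalling) is omitted entirely:

#include <agent.h>    /* libnice; found via pkg-config's "nice" module */

static void
recv_cb (NiceAgent *agent, guint stream_id, guint component_id,
         guint len, gchar *buf, gpointer user_data)
{
    /* Handle an incoming packet on (stream_id, component_id). */
}

int
main (void)
{
    GMainLoop *loop = g_main_loop_new (NULL, FALSE);
    NiceAgent *agent;
    guint stream_id;

    /* The agent's internal timers run in the GMainContext given here. */
    agent = nice_agent_new (g_main_loop_get_context (loop),
                            NICE_COMPATIBILITY_RFC5245);
    g_object_set (agent, "stun-server", "stun.example.net", NULL);  /* placeholder */

    /* One stream with a single component, received via a callback. */
    stream_id = nice_agent_add_stream (agent, 1);
    nice_agent_attach_recv (agent, stream_id, 1,
                            g_main_loop_get_context (loop), recv_cb, NULL);

    /* Gather local candidates; exchanging them with the peer (signalling)
     * is out of libnice's scope and is not shown here. */
    nice_agent_gather_candidates (agent, stream_id);

    g_main_loop_run (loop);

    g_object_unref (agent);
    g_main_loop_unref (loop);
    return 0;
}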

The biggest change is the addition of nice_agent_get_io_stream(), and the GIOStream subclass it returns. This allows reliable ICE streams to be used via GIOStream, with all the API sugar which comes with GIO streams — for example, g_output_stream_splice(). Unreliable (UDP-like) ICE streams can’t be used this way because they’re not technically streams.
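
As a rough illustration of what that buys you (my own sketch, with made-up stream and component numbers), a reliable stream can then be written to like any other GIO stream:

GIOStream *stream = nice_agent_get_io_stream (agent, 1, 1);
GOutputStream *ostream = g_io_stream_get_output_stream (stream);
GError *error = NULL;

/* Plain GIO from here on; libnice deals with ICE and the NAT traversal. */
if (!g_output_stream_write_all (ostream, "hello", 5, NULL, NULL, &error)) {
    g_warning ("Write failed: %s", error->message);
    g_clear_error (&error);
}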

Highly related, the original receive API has been augmented with scatter–gather support in the form of a recvmmsg()-like API: nice_agent_recv_messages(). Along with appropriate improvements to libnice’s underlying socket implementations (the most obscure of which are still to be plumbed in), this allows performance improvements by batching messages, reducing the number of system calls needed for communication. Furthermore (perhaps more importantly) it reduces memory copies when assembling and parsing packets, by allowing the packets to be split across multiple non-contiguous buffers. This is a well-studied and long-known performance technique in networking, and it’s nice that libnice now supports it.

So, if you have an ICE connection (stream 1 on agent, with 2 components) exchanging packets with 20B headers and variable-length payloads, instead of:

nice_agent_attach_recv (agent, 1, 1, main_context, recv_cb, NULL);
nice_agent_attach_recv (agent, 1, 2, main_context, recv_cb, NULL);

…

static void
recv_cb (NiceAgent *agent, guint stream_id, guint component_id,
         guint len, const gchar *buf, gpointer user_data)
{
    if (stream_id != 1 ||
        (component_id != 1 && component_id != 2)) {
        g_assert_not_reached ();
    }

    if (parse_header (buf)) {
        if (component_id == 1)
            parse_component1_data (buf + 20, len - 20);
        else
            parse_component2_data (buf + 20, len - 20);
    }
}

…

static void
send_to_component (guint component_id,
                   const gchar *data_buf, gsize data_len)
{
    gsize len = 20 + data_len;
    guint8 *buf = malloc (len);

    build_header (buf);
    memcpy (buf + 20, data_buf, data_len);

    if (nice_agent_send (agent, 1, component_id,
                         len, buf) != len) {
        /* Handle the error */
    }

    free (buf);
}

you can now do:

/* Only set up 1 NiceInputMessage as an illustration. */

static guint8 buf1_1[20];  /* header */
static guint8 buf1_2[1024];  /* payload size limit */
static GInputVector buffers1[2] = {
    { &buf1_1, sizeof (buf1_1) },  /* header */
    { &buf1_2, sizeof (buf1_2) },  /* payload */
};
static NiceInputMessage messages[1] = {
    buffers1, G_N_ELEMENTS (buffers1),
    NULL, 0
};
gint n_messages, i;
GError *error = NULL;

n_messages = nice_agent_recv_messages (agent, 1, 1, messages,
                                       G_N_ELEMENTS (messages),
                                       NULL, &error);
if (n_messages == 0 || error != NULL) {
    /* Handle the EOS or error. */
    if (error != NULL)
        g_error ("Error: %s", error->message);
    return;
}

/* Component 2 can be handled similarly and code paths combined. */
for (i = 0; i < n_messages; i++) {
    NiceInputMessage *message = &messages[i];

    if (parse_header (message->buffers[0].buffer)) {
        parse_component1_data (message->buffers[1].buffer,
                               message->buffers[1].size);
    }
}

…

static void
send_to_component (guint component_id, const gchar *data_buf,
                   gsize data_len)
{
    GError *error = NULL;
    guint8 header_buf[20];
    GOutputVector vec[2] = {
        { header_buf, sizeof (header_buf) },
        { data_buf, data_len },
    };
    NiceOutputMessage message = { vec, G_N_ELEMENTS (vec) };

    build_header (header_buf);

    if (nice_agent_send_messages_nonblocking (agent, 1, component_id,
                                              &message, 1, NULL,
                                              &error) != 1) {
        /* Handle the error */
        g_error ("Error: %s", error->message);
    }
}

libnice has also gained non-blocking variants of its I/O functions. Previously, one had to explicitly attach a libnice stream to a GMainContext to start receiving packets. Packets would be delivered individually via a callback function (set with nice_agent_attach_recv()), which was inefficient and made for awkward control flow. Now, the non-blocking I/O functions can be used with a custom GSource from g_pollable_input_stream_create_source() to allow for more flexible reception of packets using the more standard GLib pattern of attaching a GSource to the GMainContext and in its callback, calling g_pollable_input_stream_read_nonblocking() until all pending packets have been read. libnice’s internal timers (used for retransmit timeouts, etc.) are automatically added to the GMainContext passed into nice_agent_new() at construction time, which you must run all the time as before.

GIOStream *stream = nice_agent_get_io_stream (agent, 1, 1);
GInputStream *istream;
GPollableInputStream *pollable_istream;
GSource *source;

istream = g_io_stream_get_input_stream (stream);
pollable_istream = G_POLLABLE_INPUT_STREAM (istream);

source = g_pollable_input_stream_create_source (pollable_istream, NULL);
g_source_set_callback (source, (GSourceFunc) readable_cb, pollable_istream, NULL);
g_source_attach (source, main_context);

static gboolean
readable_cb (gpointer user_data)
{
    GPollableInputStream *pollable_istream = user_data;
    GError *error = NULL;
    gssize len;
    guint8 buf[1024];  /* whatever the maximum packet size is */

    /* Read packets until the queue is empty. */
    while ( (len = g_pollable_input_stream_read_nonblocking (pollable_istream,
                                                             buf, sizeof (buf),
                                                             NULL,
                                                             &error) ) > 0) {
        /* Do something with the received packet. */
    }

    if (error != NULL) {
        /* Handle the error. */
        g_clear_error (&error);
    }

    return G_SOURCE_CONTINUE;
}

libnice also gained much-improved support for restarting individual streams using ICE restarts with the addition of nice_agent_restart_stream(), switching TURN relays with nice_agent_forget_relays(), plus a number of bug fixes.
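
As a rough sketch of how those calls might slot in (the stream and component IDs here are just illustrative, not from the post):

/* e.g. after a network change: restart ICE on stream 1 and drop the
 * TURN relays previously allocated for its first component. */
if (!nice_agent_restart_stream (agent, 1))
    g_warning ("ICE restart failed");

nice_agent_forget_relays (agent, 1, 1);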

Finally, FIN/ACK support has been added to libnice’s pseudo-TCP implementation. The code was originally based on Google’s libjingle pseudo-TCP, establishing a reliable connection over UDP by encapsulating TCP-like packets within UDP. This implemented the basics of TCP, but left things like the closing FIN/ACK handshake to higher-level protocols. Fine for Google, but not for our use case, so we added support for that. Furthermore, we needed to layer TLS over a pseudo-TCP connection using GTlsConnection, which required implementing half-duplex close support and fixing a few nasty leaks in GTlsConnection.
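
To give an idea of how that fits together, here's a sketch under my own assumptions (the peer name is a placeholder and ICE negotiation is elided): create a reliable agent so the pseudo-TCP code is used, then hand its GIOStream to GTlsConnection:

GError *error = NULL;

/* A "reliable" agent runs the pseudo-TCP code underneath its streams. */
NiceAgent *agent = nice_agent_new_reliable (g_main_context_default (),
                                            NICE_COMPATIBILITY_RFC5245);
guint stream_id = nice_agent_add_stream (agent, 1);

/* ... negotiate ICE as usual, then layer TLS on top: */
GIOStream *base = nice_agent_get_io_stream (agent, stream_id, 1);
GIOStream *tls = g_tls_client_connection_new (base,
    g_network_address_new ("peer.example.net", 0),  /* placeholder identity */
    &error);

/* Handshake and I/O then go through the usual GTlsConnection API. */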

Thanks to the libnice community for testing out the changes, and thanks to the GLib developers for patiently reviewing the stream of tiny documentation fixes and several larger GLib patches! All of the libnice API changes are shown on the handy upstream-tracker.org tool.

by Philip Withnall at October 31, 2014 09:50 AM

October 28, 2014

Jeremy Whiting

Accessibility is alive (QtSpeech progress, Jovie's deprecation)

For some time I've been considering what to do about Jovie, which was previously known as ktts (KDE Text To Speech). I've been considering it since before the first KDE Frameworks release, actually, since kdelibs used to host a D-Bus interface definition for the KSpeech interface that ktts and then Jovie implemented. I have a Qt 5 frameworks branch of Jovie, but it didn't make much sense to port it, since a lot of it is, or could become, part of the upcoming QtSpeech module. So Jovie has no official Qt 5 port and won't be getting one either.

What will Okular, KNotify, and other applications that want to speak to users do instead? The answer is QtSpeech. QtSpeech is a project started by Frederik Gladhorn to bring speech APIs to all the platforms that Qt supports. It is still in its infancy, but it is quickly improving. A few weeks ago, when I built my KF5 stack with kdesrc-build, I noticed that kdepim(libs?) was depending on it even though it hasn't been released yet, so I got motivated to send some improvements to qt-project. Frederik and Laurent Montel have been pushing fixes and improving it as well. It is as easy to use as the KSpeech D-Bus API, if not easier (and doesn't require D-Bus either), and can be used to speak text on Linux/Unix, OS X, Windows, and Android so far. If you are an expert on any of these platforms, please send patches to implement the API in their backends; the more eyes on this project, the faster we can get it solidified and released.

You may be asking, "But what about feature X in Jovie that I will miss desperately?" Yes, there are a few things that QtSpeech will not do that Jovie did. These will either need to be done in individual applications, or we can create a small framework to add these features (or possibly add them to QtSpeech itself if they make sense there). The features I'm thinking of are:

1. Filtering - Changing "jpwhiting: Hey QtSpeech is really coming along now" to "jpwhiting says 'Hey QtSpeech is really coming along now'" for KNotifications and the like. This could likely be implemented easily in knotify itself and exposed in the notifications configuration dialog.
2. Voice switching - Changing which voice to use based on the text, or the application it is coming from or anything else. This might make sense in QtSpeech itself, but time will tell if it's a wanted/needed feature.
3. User configuration - Jovie had a decent (not ideal, but functional) UI to set some voice preferences, such as which voice you wanted to use, which pitch, volume, speed, gender, etc. This is the only part of Jovie that will get ported: a KDE Control Module for speech-dispatcher settings. This may also change over time, as speech-dispatcher itself possibly grows a UI for its settings.

All in all, progress is being made. I expect QtSpeech to be ready for release with Qt 5.5, but we'll see what happens.

by Jeremy Whiting (noreply@blogger.com) at October 28, 2014 10:05 PM