Planet Collabora

August 23, 2016

Helen Koike

Increased performance of emulated NVMe devices

Nowadays, in Google Compute Engine (GCE), it is possible to attach a local SSD with the NVMe interface to your virtual machine. Unfortunately, you only get a high number of IOPS (input/output operations per second) if you instantiate a machine with the nvme-backports-debian-7-wheezy image; the other distributions available on GCE will give you a lower number of IOPS.

It turns out that Google's Virtual Machine Monitor (aka hypervisor) implements a custom NVMe command that allows the number of IOPS to be increased up to 4 times (note: this is what I've measured so far, but according to the original commit message it seems possible to get up to 5 times faster; check the Technical Details section below to see how this is possible). However, the kernel you use needs to support it, and this is not yet the case with the mainline kernel.

This is not exclusive to GCE, as Google released the patches not only for the kernel but also for QEMU, and they are available here.

Collabora has been helping to update, refactor and review the patches to the Linux kernel so they can be sent upstream. However, since this is not yet an official NVMe standard, it shouldn't be merged into the mainline kernel, as the specification may still change.

Seeing as it considerably increases performance, the feature is in the process of being discussed and proposed to the NVMe workgroup with Collabora's help.
While nvmexpress.org seems interested in adding an official extension to standardize it, as published in the mailing list, nothing has been defined yet: this is a very recent discussion, and it can take up to a year to be ratified by the NVMe workgroup.

So, for the time being, you can get a more recent version of the patch and install the driver yourself here: https://git.collabora.com/cgit/user/koike/linux.git/log/?h=nvme/dev

How does it work?

Technical details

The NVMe interface basically works with command queues. The driver writes a command in a memory region known to both the driver and the device controller, and then updates the tail of the queue by writing to an MMIO register called the doorbell.

In an environment with several guest OSes on top of a VMM sharing a resource, communication between a guest OS and the real device is usually trapped by the VMM. As an MMIO access is usually a synchronous access to the device, every MMIO access will cause a trap.

Example of an emulated device in the VMM
The main idea here is to decrease the number of traps to the VMM by reducing the number of writes to the doorbells.

This is achieved in two ways:
    1) Batching; or
    2) Letting the VMM poll the current doorbell value when it is already executing.

The first one is easy: we can wait for X commands to be written to the queue before ringing the doorbell.
The second one is a bit more complicated: the guest OS needs to inform the emulated device in the VMM where it can poll the doorbell values from, and the emulated NVMe device needs to inform the guest OS that it can restart the counter of X.

This is what the new feature does:
It adds a new command to the NVMe interface through which the driver can send the NVMe device controller two memory buffers:
1) A buffer holding the real doorbell values: instead of writing to the MMIO doorbell, the driver writes the value to this buffer; and
2) Another buffer with a hint from the controller about how many commands the driver can write to the queue without ringing the doorbell.
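
To make the mechanism concrete, here is a minimal sketch in C of what the driver-side submission path could look like. This is an illustration only, with made-up names; it is not Google's actual patch nor the code in the tree linked above. The "do we still need a real MMIO write?" test follows the classic event-index idiom:

    #include <stdint.h>

    struct nvme_sq {
        uint16_t           tail;       /* driver-side tail index */
        volatile uint32_t *db_mmio;    /* real MMIO doorbell register (traps) */
        volatile uint16_t *shadow_db;  /* buffer 1: driver publishes tail values here */
        volatile uint16_t *event_idx;  /* buffer 2: controller's "ring me after this" hint */
    };

    /* True when the tail has moved past the controller's hint, meaning the
     * emulated device is no longer polling and a trapping MMIO write is needed. */
    static int need_mmio_ring(uint16_t event_idx, uint16_t new_idx, uint16_t old_idx)
    {
        return (uint16_t)(new_idx - event_idx - 1) < (uint16_t)(new_idx - old_idx);
    }

    static void nvme_ring_doorbell(struct nvme_sq *sq)
    {
        uint16_t old_tail = *sq->shadow_db;

        /* Publish the new tail without trapping; a real driver would also
         * need memory barriers around these accesses. */
        *sq->shadow_db = sq->tail;
        if (need_mmio_ring(*sq->event_idx, sq->tail, old_tail))
            *sq->db_mmio = sq->tail;   /* fall back to the trapping MMIO write */
    }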


The exact technical details may still change in the future, especially regarding how to properly implement the second item above. It is also very likely that Google's patches won't be compliant with the standard that is eventually ratified.

For the time being though, you can use the Collabora tree. Please let me know if you have any comments/feedback!

by Helen Fornazier (noreply@blogger.com) at August 23, 2016 09:25 PM

August 12, 2016

Philip Withnall

Thoughts about reviewing large patchsets

I have recently been involved in reviewing some large feature patchsets for a project at work, and thought it might be interesting to discuss some of the principles we have been trying to stick to when going about these reviews.

These are just suggestions which will not apply verbatim to every project, but they might provoke some ideas which are relevant to your project. In many cases, a developer might find a checklist is not useful for them — it is too rigid, or slows down the pace of development too much. Checklists are probably best treated as strong suggestions, rather than strict requirements for a review to pass; developers need to use them as reminders for things they should think about when submitting a patch, rather than a box-ticking exercise. Checklists probably work better in larger projects with lots of infrequent or inexperienced contributors, who may not be aware of all of the conventions of the project.

Overall principles

In order to break a set of reviews down into chunks, there are a few key principles to stick to:

  • Review patches which are as small as possible, but no smaller (see here, here and here)
  • Learn from each review so the same review comments do not need to be made more than once
  • Use automated tools to eliminate many of the repetitive and time consuming parts of patch review and rework
  • Do high-level API review first, ideally before implementing those APIs, to avoid unnecessary rework

Pre-submission checklist

(A rationale for each of these points is given in the section below to avoid cluttering this one.)

This is the pre-submission checklist we have been using to check patches against before submission for review:

  1. All new code follows the coding guidelines, especially the namespacing guidelines, memory management guidelines, pre- and post-condition guidelines, and introspection guidelines — some key points from these are pulled out below, but these are not the only points to pay attention to.
  2. All new public API must be namespaced correctly.
  3. All new public API must have a complete and useful documentation comment.
  4. All new public API documentation comments must have GObject Introspection annotations where appropriate; g-ir-scanner (part of the build process) must emit no warnings when run with --warn-all --warn-error (which should be set by $(WARN_SCANNERFLAGS) from AX_COMPILER_FLAGS).
  5. All new public methods must have pre- and post-conditions to enforce constraints on the accepted parameter values.
  6. The code must compile without warnings, after ensuring that AX_COMPILER_FLAGS is used and enabled in configure.ac (if it is correctly enabled, compiling the module should fail if there are any compiler warnings) — remember to add $(WARN_CFLAGS), $(WARN_LDFLAGS) and $(WARN_SCANNERFLAGS) to new Makefile.am targets as appropriate.
  7. The introduction documentation comment for each new object must give a usage example for each of the main ways that object is intended to be used.
  8. All new code must be formatted as per the project’s coding guidelines.
  9. If possible, there should be an example program for each new feature or object, which can be used to manually test that functionality — these examples may be submitted in a separate patch from the object implementation, but must be submitted at the same time as the implementation in order to allow review in parallel. Example programs must be usable when installed or uninstalled, so they can be used during development and on production machines.
  10. There must be automated tests (using the GTest framework in GLib) for construction of each new object, and for getting and setting each of its properties.
  11. The code coverage of the automated tests must be checked (using make check-code-coverage and AX_CODE_COVERAGE) before submission, and if it’s possible to add more automated tests (and for them to be reliable) to improve the coverage, this should be done; the final code coverage figure for the object should be mentioned in a comment on the patch review, and it would be helpful to have the lcov reports for the object saved somewhere for analysis as part of the review.
  12. There must be no definite memory leaks reported by Valgrind when running the automated tests under it (using AX_VALGRIND_CHECK and make check-valgrind).
  13. All automated tests must be installed as installed-tests so they can be run as integration tests on a production system.
  14. make distcheck must pass before submission of any patch, especially if it touches the build system.
  15. All new code has been checked to ensure it doesn’t contradict review comments from previous reviews of other patches (i.e. we want to avoid making the same review comments on every submitted patch).
  16. Commit messages must explain why they make the changes they do.
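
For reference, here is a minimal sketch of the build-system boilerplate that points 4, 6, 11 and 12 assume, using the autoconf-archive macros named above (the libfoo target is a placeholder, not from any particular project):

    # configure.ac (fragment)
    AX_COMPILER_FLAGS
    AX_CODE_COVERAGE
    AX_VALGRIND_CHECK

    # Makefile.am (fragment)
    @CODE_COVERAGE_RULES@
    @VALGRIND_CHECK_RULES@

    libfoo_la_CFLAGS = $(WARN_CFLAGS) $(CODE_COVERAGE_CFLAGS)
    libfoo_la_LDFLAGS = $(WARN_LDFLAGS)
    INTROSPECTION_SCANNER_ARGS = $(WARN_SCANNERFLAGS)

With something along these lines in place, the make check-code-coverage and make check-valgrind targets from points 11 and 12 become available.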

Rationales

  1. Each coding guideline has its own rationale for why it’s useful, and many of them significantly affect the structure of a patch, so are important to get right early on.
  2. Namespacing is important for the correct functioning of a lot of the developer tools (for example, GObject Introspection), and to avoid symbol collisions between libraries — checking it is a very mechanical process which it is best to not have to spend review time on.
  3. Documentation comments are useful to both the reviewer and to end users of the API — for the reviewer, they act as an explanation of why a particular API is necessary, how it is meant to be used, and can provide insight into implementation choices. These are questions which the reviewer would otherwise have to ask in the review, so writing them up lucidly in a documentation comment saves time in the long run.
  4. GObject Introspection annotations are a requirement for the platform’s language bindings (to JavaScript or Python, for example) to work, so must be added at some point. Fixing the error messages from g-ir-scanner is sufficient to ensure that the API can be introspected.
  5. Pre- and post-conditions are a form of assertion in the code, which check for programmer errors at runtime. If they are used consistently throughout the code on every API entry point, they can catch programmer errors much nearer their origin than otherwise, which speeds up debugging both during development of the library, and when end users are using the public APIs. They also act as a loose form of documentation of what each API will allow as its inputs and outputs, which helps review (see the comments about documentation above).
  6. The set of compiler warnings enabled by AX_COMPILER_FLAGS have been chosen to balance false positives against false negatives in detecting bugs in the code. Each compiler warning typically identifies a single bug in the code which would otherwise have to be fixed later in the life of the library — fixing bugs later is always more expensive in terms of debugging time.
  7. Usage examples are another form of documentation (as discussed above), which specifically make it clearer to a reviewer how a particular feature is intended to be used. In writing usage examples, the author of a patch can often notice awkwardnesses in their API design, which can then be fixed before review — this is faster than them being caught in review and sent back for modification.
  8. Well formatted code is a lot easier to read and review than poorly formatted code. It allows the reviewer to think about the function of the code they are reviewing, rather than (for example) which function call a given argument actually applies to, or which block of code a statement is actually part of.
  9. Example programs are a loose form of testing, and also act as usage examples and documentation for the feature (see above). They provide an easy way for the reviewer to test a feature, especially if it affects a UI or has some interactive element; this is very hard to do by simply looking at the code in a patch. Their biggest benefit will be when the feature is modified in future — the example programs can be used to test changes to the feature and ensure that its behaviour changes (or does not) as expected.
  10. For each unit test for a piece of code, the behaviour checked by that unit test can be guaranteed to be unchanged across modifications to the code in future. This prevents regressions (especially if the unit tests for the project are set up to be run automatically on each commit by a continuous integration system). The value of unit tests when initially implementing a feature is in the way they guide API design to be testable in the first place. It is often the case that an API will be written without unit tests, and later someone will try to add unit tests and find that the API is untestable; typically because it relies on internal state which the test harness cannot affect. By that point, the API is stable and cannot be changed to allow testing.
  11. Looking at code coverage reports is a good way to check that unit tests are actually checking what they are expected to check about the code. Code coverage provides a simple, coarse-grained metric of code quality — the quality of untested code is unknown.
  12. Every memory leak is a bug, and hence needs to be fixed at some point. Checking for memory leaks in a code review is a very mechanical, time-consuming process. If memory leaks can be detected automatically, by using valgrind on the unit tests, this reduces the amount of time needed to catch them during review. This is an area where higher code coverage provides immediate benefits. Another way to avoid leaks is to use g_autoptr() to automatically free memory when leaving a control block.
  13. If all automated tests are available, they can be run as part of system-wide integration tests, to check that the project behaviour doesn’t change when other system libraries (its dependencies) are changed. This is one of the motivations behind installed-tests. This is a one-time setup needed for your project, and once it’s set up, does not need to be done for each commit.
  14. make distcheck ensures that a tarball can be created successfully from the code, which entails building it, running all the unit tests, and checking that examples compile.
  15. If each patch is updated to learn from the results of previous patch reviews, the amount of time spent making and explaining repeated patch review comments should be significantly reduced, which saves everyone’s time.
  16. Commit messages are a form of documentation of the changes being made to a project. They should explain the motivation behind the changes, and clarify any design decisions which the author thinks the reviewer might question. If a commit message is inadequate, the reviewer is going to ask questions in the review which could have been avoided otherwise.
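
As a concrete (and hedged) illustration of points 5 and 10, the function and test below are invented for the example rather than taken from a real module:

    #include <glib.h>

    /* A namespaced public method guarded by pre-conditions (point 5). */
    gchar *
    my_widget_format_label (const gchar *name, guint count)
    {
      g_return_val_if_fail (name != NULL, NULL);
      g_return_val_if_fail (count > 0, NULL);

      return g_strdup_printf ("%s (%u)", name, count);
    }

    /* A GTest unit test covering the behaviour (point 10). */
    static void
    test_format_label (void)
    {
      gchar *label = my_widget_format_label ("Inbox", 3);
      g_assert_cmpstr (label, ==, "Inbox (3)");
      g_free (label);
    }

    int
    main (int argc, char **argv)
    {
      g_test_init (&argc, &argv, NULL);
      g_test_add_func ("/my-widget/format-label", test_format_label);
      return g_test_run ();
    }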

by Philip Withnall at August 12, 2016 07:22 AM

August 10, 2016

Jeremy Whiting

I'm here

It's been over a year since I posted anything. That's way too long. So what's going on in the projects I care about lately?
Speech-dispatcher will soon have a 0.8.5 release. There's a set of patches in the works, on a branch on GitHub, that moves the audio to the server, so we will be able to do useful things like label each PulseAudio output with the client application name rather than a generic sd_espeak or sd_pico name for each output module. This way you'll see per-application volume controls like "Konsole speech", "Konversation speech", etc. for each application's speech output. In order for this to work, some other refactoring needs to be done in the espeak and other modules so that stopping audio playback is immediate.
QtSpeech has had some work done. The Android and Windows versions have been seeing some love lately. I'm optimistic that it can be included in an upcoming Qt release, though I'm not sure why it hasn't been included yet.
KMouth has been waiting for a QtSpeech release in order for its kf5/qt5 branch to be merged to master.
KNewStuff could use some work. There was talk at a recent conference about adding the properties and such necessary for it to be used from QML. I'll follow up on that and see what has become of it.
All in all I feel like I haven't been around as much in the above projects as I'd like to have been. Life is busy and work is busy and such. I plan to spend a bit more time on these though. Even if it means I get slightly less sleep. Looking forward to a good rest of the year.

by Jeremy Whiting (noreply@blogger.com) at August 10, 2016 08:40 PM

July 26, 2016

Gustavo Padovan

Collabora contributions to Linux Kernel 4.7

Linux Kernel 4.7 was released this week with a total of 36 contributions from five Collabora engineers. It includes the first contributions from Helen as a Collaboran and the first ever kernel contributions from Robert Foss. Here are some of the highlights of the work Collabora has done on Linux Kernel 4.7.

Enric added support for the Analogix anx78xx DRM bridge and fixed two SD card related issues on OMAP igep00x0: remove/insert detection was fixed, and support for reading the write-protect pin was enabled.

Gustavo de-staged the sync_file framework (the Android Sync framework), which will be used to add explicit fencing support to the graphics pipeline, and started work on cleaning up the usage of legacy vblank helpers.

Helen Koike created a separate module for the V4L2 Test Pattern Generator and fixed error returns in the pipeline validation code, while Robert Foss improved the DRM documentation and fixed drm/vc4 (Raspberry Pi) for the case where an update is already pending when atomic_commit is called.

Tomeu fixed two Rockchip issues while working on the intel-gpu-tools support for other platforms.

Enric Balletbo i Serra (6):

Gustavo Padovan (22):

Helen Koike (3):

Robert Foss (3):

Tomeu Vizoso (2):

by Gustavo Padovan at July 26, 2016 03:19 PM

June 30, 2016

Nicolas Dufresne

GStreamer Echo Canceller

For a long time I believed that echo cancellers had no place inside GStreamer. The theory was that GStreamer was too high level and would never be able to provide accurate enough delay information for any canceller to work. With a fairly simple test, I could quickly confirm that the reported latency is often off by a period (generally 10ms). This isn't strictly GStreamer's fault and is not in any way catastrophic for the general playback experience.

With the arrival of WebRTC in browsers, it most likely became apparent that, to be cross-platform, browsers needed to have their own canceller. That's exactly what happened in libWebRTC (formerly libjingle, used in both Firefox and Chrome to implement WebRTC). They implemented an echo canceller that accepts an approximate delay, and this changes everything for GStreamer.

At Collabora, I recently had the opportunity to implement an echo canceller based on WebRTC Audio Processing. The main motivation was that the canceller on the hardware DSP we had didn't work due to a hardware bug. A lot of those boards had been produced and no rework was possible. To save these boards, we decided to try a software echo canceller. Even though it used a fair amount of CPU, the experiment was a success. I have since cleaned up the code, and the new elements are now available in GStreamer Plugins Bad.

How does it work?

Echo

The first step is to understand what the echo is. In a phone call over a loudspeaker, your microphone records both your voice and the far-end voices. The side effect is that you are sending the far-end listeners their own voices from a moment earlier, along with your voice (the echo). To avoid this echo, you need to monitor the far-end stream that you are playing back and "subtract" it from the recorded stream. In practice, it's a much more complex job, since the signal is deteriorated by the speaker and the microphone. You also need to figure out the delays and hint the canceller, otherwise you may end up with a terrible startup time, or it may simply not work.

The implementation was greatly inspired by an experiment Olivier Crête did in 2008 using Speex DSP. I must admit I never really understood his way of synchronizing the streams, and I literally ignored pretty much all the code that wasn't GStreamer-specific. The design works this way: you have a DSP element (webrtcdsp) that processes the recorded stream, and a probe element (webrtcechoprobe) that analyses the far-end stream (before playing it back). Due to a WebRTC library limitation, those two elements transform the input buffers into chunks of 10ms. This is done with the help of GstAdapter. On the probe side, we push buffers into the adapter with their timestamps transformed to running time. This time, plus the pipeline latency, gives us the moment in running time when the buffer should be heard by the microphone. We then synchronize the far-end data against the recorded data and let the WebRTC Audio Processing library do its magic. A simple way of testing the element is with an echo loop.

  gst-launch-1.0 pulsesrc ! webrtcdsp ! webrtcechoprobe ! pulsesink

Without the canceller, this pipeline would create a lot of echo, and probably end in loud feedback if your microphone volume is high enough. With the canceller, you should instead hear only one echo. It behaves a bit like a sound monitor, but with too high a latency and the side effect of fading monotonic frequencies in and out. After all, this is not what the algorithm has been designed for. Try it in your real audio call application; that's where you will most likely get the best results.
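
For illustration, here is a rough sketch of how the two elements could sit in a real call, assuming Opus over RTP/UDP; the host, ports and payload type are made up, and both branches must live in the same pipeline so the DSP can pair with the probe:

  gst-launch-1.0 \
      udpsrc port=5000 caps="application/x-rtp,media=audio,clock-rate=48000,encoding-name=OPUS,payload=96" ! \
          rtpjitterbuffer ! rtpopusdepay ! opusdec ! audioconvert ! webrtcechoprobe ! pulsesink \
      pulsesrc ! audioconvert ! audioresample ! webrtcdsp ! audioconvert ! opusenc ! \
          rtpopuspay pt=96 ! udpsink host=192.0.2.1 port=5002

The far-end branch plays back the received audio through the probe, while the near-end branch records, cancels the echo and sends the result out.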

Before I conclude, there is a good reason why I called the element DSP rather than AEC. WebRTC Audio Processing is much more than just an echo canceller. In fact, it implements a wide variety of filters: noise suppression, voice activity detection, etc. Currently we enable only a subset of it, but I'm definitely looking forward to enabling more (if not all) features from this library. I also encourage contributions. This work was only possible because of the great effort Arun Raghavan has put into extracting the echo canceller from the WebRTC project and creating a standalone library usable by all. If you are interested in what cool features could be added in the future, have a look at Arun's blog post about beamforming. And lastly, thanks to my colleagues who had to suffer me speaking to my computer, listening to my own echo, for a few weeks.

by Nicolas at June 30, 2016 10:14 PM

June 23, 2016

Frederic Plourde

SVVR2016

I've been fortunate enough lately to attend the largest professional virtual reality event/conference: SVVR. This virtual reality conference has been held each year in Silicon Valley for 3 years now. This year, it showcased more than 100 VR companies on the exhibit floor and welcomed more than 1400 VR professionals and enthusiasts from all around the world. As a VR enthusiast myself, I attended the full 3-day conference, met most of the exhibitors, and I'd like to summarize my thoughts and the things I learned below, grouped under various themes. This post is by no means exhaustive and consists of my own, personal opinions.

CONTENT FOR VR

I realize that content creation for VR is really becoming the one area where most players will end up working. Hardware manufacturers and platform software companies are building the VR infrastructure as we speak (and it's already comfortably usable), but as we move along and standards become more solid, I'm pretty sure we're going to see lots and lots of new start-ups in the VR content world, creating immersive games, 360° video content, live VR events, etc. Right now, the realm of deployment possibilities for a content developer is not really elaborate. The vast majority of content creators are targeting the Unity3D plug-in, since it's got built-in support for virtually all VR devices there are on the market, like the Oculus family of headsets, the HTC Vive, PlayStation VR, Samsung's GearVR, and even generic D3D or OpenGL-based applications on PC/Mac/Linux.

2 types of content

There really are two main types of VR content out there: 3D virtual, artificially-generated content, and 360° real-life captured content.

The former is what we usually refer to when thinking about VR, that is, computer-generated 3D worlds, e.g. in games, in which the VR user can wander and interact. This is usually the kind of content used in VR games, but also in VR applications, like Google's great drawing app called TiltBrush (more info below). Click here to see a nice demo video!

The latter is everything that's not generated but rather "captured" from real life and projected or rendered in the VR space with the use of, most commonly, spherical projections and post-processing stitching and filtering. Simply said, we're talking about 360° videos here (both 2D and 3D). Usually, this kind of content does not let VR users interact with the VR world as "immersively" as computer-generated 3D content. It's rather "played back" and "replayed", just like regular online television series, for example, except for the fact that watchers can "look around".

At SVVR2016, there were so many exhibitors doing VR content: InnerVision VR, Baobab Studios, SculptVR, MiddleVR, Cubicle Ninjas, etc. on the computer-generated side, and Facade TV, VR Sports, Koncept VR, etc. on the 360° video production side.

TRACKING

Personally, I think tracking is by far the most important factor when considering the whole VR user experience. You have to actually try the HTC Vive tracking system to understand. The HTC Vive uses two "Lighthouse" base stations placed in the room to let you track a larger space, something close to 15′ x 15′! I tried it many times, and tracking always stayed pretty solid and constant. With the Vive you can literally walk in the VR space, zig-zag, leap and dodge without losing detection. On that front, I think the competition is doing quite poorly. For example, Oculus' CV1 only tracks your movement from the front, and the tracking angle is pretty narrow; tracking was often lost when I turned away just a little. Disappointing!

Talking about tracking, one of the most amazing talks was Leap Motion CTO David Holz's demo of his brand new 'Orion', a truly impressive hand-tracking camera with very powerful detection algorithms and very, very low latency. We could only "watch" David interact, but it looked so natural! Check it out for yourself!

AUDIO

Audio is becoming increasingly crucial to the VR workflow, since it adds so much to the VR experience. It is generally agreed in the VR community that awesome, well 3D-localised audio that seems "real" can add a lot of realism even to the visuals. At SVVR2016, there were a few audio-centric exhibitors, like Ossic and Subpac. The former is releasing a Kickstarter-funded 3D headset that lets you "pan" stereo audio content by rotating your head left and right. The latter is showcasing a complete body suit using tactile transducers and vibrotactile membranes to make you "feel" audio. The goal of this article is not to review specific technologies, but to discuss every aspect/domain that is part of the VR experience, and when it comes to audio, I unfortunately feel we're still at the "3D sound is enough" level. I believe it's not.

See, proper 3D audio localization is a must, of course. You obviously do not want to play a VR game where a dog appearing on your right is barking on your left! Nor do you want to have the impression a hovercraft is approaching up ahead when it's actually coming from behind. Fortunately, we now have pretty good audio engines that correctly render audio coming from anywhere around you, with good front/back discrimination. A good example of that is 3Dception from TwoBigEars. 3D spatialization of audio channels is a must-have, and yet it's an absolute minimum in my opinion. Try it for yourself! Most of today's VR games have spatially coherent sound, but most of the time, you just do not believe the sound is actually "real". Why?

Well, there are a number of reasons, going from "limited audio diversity" (a limited number of objects/details found in the audio feed, like missing tiny air flows/sounds, the user's respiration or the room's ambient noise level) to limited sound cancellation capability (the ability to suppress high-pitched ambient sounds coming from "outside" the game), but I guess one of the most important factors is simply the way audio is recorded and rendered in our day-to-day cheap stereo headsets. A lot of promise is brought by binaural recording and stereo-to-binaural conversion algorithms. Binaural recording is a technique that records audio through two tiny omni microphones located under diaphragm structures resembling the human ears, so that audio is bounced back just as if it were being routed through the human ears before reaching the microphones. The binaural audio experience is striking and the "stereo" feeling is magnified. It is very difficult to explain; you have to hear it for yourself. Talking about ear structure having a direct impact on the audio spectrum, I think one of the most promising techniques moving forward for added audio realism will be the whole geometry-based audio modeling field, where you can basically render sound as if it had actually been reflected off computer-generated 3D geometry. Using such models, a dog barking in front of a tiled metal shed will sound really different from the same dog barking near a wooden chalet. The brain does pick up those tiny details, and that's why you find companies like Nvidia releasing their brand new "Physically Based Acoustic Simulator Engine" in VRWorks.

HAPTICS

Haptics is another very interesting VR domain that consists of letting users perceive virtual objects not through the visual or aural channels, but through touch. Usually, this sense of touch in the VR experience is brought in by the use of special haptic wands that, using force feedback and other technologies, make you think that you are actually pushing an object in the VR world.

You mostly find two types of haptic devices out there: wand-based and glove-based. Gloves for haptics are of course more natural to most users. It's easy to picture yourself in a VR game trying to "feel" raindrops falling on your fingers, or in a flight simulator, pushing buttons and really feeling them. However, from talking to many exhibitors at SVVR, it seems we'll be stuck at the "feel button pushes" level for quite some time, as we're very far from being able to render "textures", since the spatial resolutions involved would simply be too high for any haptic technology that's currently available. There are some pretty cool start-ups with awesome glove-based haptic technologies, like the Kickstarter-funded Neurodigital Technologies GloveOne or Virtuix's Hands Omni.

Now, I'm not saying wand-based haptic technologies are outdated and not promising. In fact, I think they are more promising than gloves for any VR application that relies on "tools", like a painting app requiring you to use a brush, or a remote-surgery medical application requiring you to use an actual scalpel! When it comes to wands, tools and the like, the potential for haptic feedback is multiplied, because you simply have more room to fit more actuators and gyros. I once tried an arm-based 3D joystick in a CAD application and I could swear I was really hitting objects with my design tool. It was stunning!

SOCIAL

If VR really takes off in the consumer mass market someday soon, it will most probably be social. That's something I heard at SVVR2016 (paraphrased) in the very interesting talk by David Baszucki titled "Why the future of VR is social". I mean, in essence, let's just take a look at how technology is appropriated nowadays and acknowledge that the vast majority of applications rely on the "social" aspect, right? People want to "connect", "communicate" and "share". So when VR comes around, why would it suddenly be different? Of course, gamers will want to play really immersive VR games, and workers will want to use VR in their daily tasks to boost productivity, but most users will probably want to put on their VR glasses to talk to their relatives, thousands of miles away, as if they were sitting in the same room. See? Even the gamers and the workers I referred to above will want to play or work "with other real people". No matter how you use VR, I truly believe the social factor will be one of the most important ones to consider when building successful software. At SVVR2016, I discovered a very interesting start-up focused on the social VR experience. With mimesys's telepresence demo, using an HTC Vive controller, they had me collaborate on a painting with a "real" guy hooked into the same system, painting from his home apartment in France, some 9850 km away, and I had a pretty good sense of his "presence". The 3D geometry and rendered textures were not perfect, but it was good enough for a true collaboration experience!

MOVING FORWARD

We're only at the very beginning of this very exciting journey through virtual reality, and it's really difficult to predict what VR devices will look like in only 3-5 years from now, because things are just moving so quickly. A big area I did not cover in my post, and that will surely change a lot of parameters moving forward in the VR world, is AR, Augmented Reality :) Check out what MagicLeap's up to these days!


by fredinfinite23 at June 23, 2016 12:04 PM

June 22, 2016

Philip Withnall

GTK+ hackfest 2016

A dozen GNOME hackers invaded the Red Hat office in Toronto last week, to spend four days planning the next year of work on our favourite toolkit, GTK+; and to think about how Flatpak applications can best integrate with the rest of the desktop.

What did we do?

  • Worked out an approach for versioning GTK+ in future, to improve the balance between stability and speed of development. This has turned into a wiki page.
  • I demoed Dunfell and added support for visualising GTasks to it. I don’t know how much time I will have for it in the near future, so help and feedback are welcome.
  • There was a detailed discussion of portals for Flatpak, including lots of use cases, and the basics of a security design were decided which allows the most code reuse while also separating functionality. Simon has written more about this.
  • I missed some of the architectural discussion about the future of GTK+ (including moving some classes around, merging some things and stripping out some outdated things), but I believe Benjamin had useful discussions with people about it.
  • Allan, Philip, Mike and I looked at using hotdoc for developer.gnome.org, and possible layouts for a new version of the site. Christian spent some time thinking about integration of documentation into GNOME Builder.
  • Allison did a lot of blogging, and plotted with Alex to add some devious new GVariant functionality to make everyone’s lives easier when writing parsers — I’ll leave her to blog about it.

Thanks to Collabora for sending me along to take part!

After the hackfest, I spent a few days exploring Toronto, and as a result ended up very sunburned.

Busy at work in the hackfest room. Totem pole in the Royal Ontario Museum. The backstory for this one was trippy. The distillery district.

by Philip Withnall at June 22, 2016 05:42 PM

June 20, 2016

Simon McVittie

GTK Hackfest 2016

I'm back from the GTK hackfest in Toronto, Canada and mostly recovered from jetlag, so it's time to write up my notes on what we discussed there.

Despite the hackfest's title, I was mainly there to talk about non-GUI parts of the stack, and technologies that fit more closely in what could be seen as the freedesktop.org platform than they do in GNOME. In particular, I'm interested in Flatpak as a way to deploy self-contained "apps" in a freedesktop-based, sandboxed runtime environment layered over the Universal Operating System and its many derivatives, with both binary and source compatibility with other GNU/Linux distributions.

I'm mainly only writing about discussions I was directly involved in: lots of what sounded like good discussion about the actual graphics toolkit went over my head completely :-) More notes, mostly from Matthias Clasen, are available on the GNOME wiki.

In no particular order:

Thinking with portals

We spent some time discussing Flatpak's portals, mostly on Tuesday. These are the components that expose a subset of desktop functionality as D-Bus services that can be used by contained applications: they are part of the security boundary between a contained app and the rest of the desktop session. Android's intents are a similar concept seen elsewhere. While the portals are primarily designed for Flatpak, there's no real reason why they couldn't be used by other app-containment solutions such as Canonical's Snap.

One major topic of discussion was their overall design and layout. Most portals will consist of a UX-independent part in Flatpak itself, together with a UX-specific implementation of any user interaction the portal needs. For example, the portal for file selection has a D-Bus service in Flatpak, which interacts with some UX-specific service that will pop up a standard UX-specific "Open" dialog — for GNOME and probably other GTK environments, that dialog is in (a branch of) GTK.

A design principle that was reiterated in this discussion is that the UX-independent part should do as much as possible, with the UX-specific part only carrying out the user interactions that need to comply with a particular UX design (in the GTK case, GNOME's design). This minimizes the amount of work that needs to be redone for other desktop or embedded environments, while still ensuring that the other environments can have their chosen UX design. In particular, it's important that, as much as possible, the security- and performance-sensitive work (such as data transport and authentication) is shared between all environments.

The aim is for portals to get the user's permission to carry out actions, while keeping it as implicit as possible, avoiding an "are you sure?" step where feasible. For example, if an application asks to open a file, the user's permission is implicitly given by them selecting the file in the file-chooser dialog and pressing OK: if they do not want this application to open a file at all, they can deny permission by cancelling. Similarly, if an application asks to stream webcam data, the UX we expect is for GNOME's Cheese app (or a similar non-GNOME app) to appear, open the webcam to provide a preview window so they can see what they are about to send, but not actually start sending the stream to the requesting app until the user has pressed a "Start" button. When defining the API "contracts" to be provided by applications in that situation, we will need to be clear about whether the provider is expected to obtain confirmation like this: in most cases I would anticipate that it is.

One security trade-off here is that we have to have a small amount of trust in the providing app. For example, continuing the example of Cheese as a webcam provider, Cheese could (and perhaps should) be a contained app itself, whether via something like Flatpak, an LSM like AppArmor or both. If Cheese is compromised somehow, then whenever it is running, it would be technically possible for it to open the webcam, stream video and send it to a hostile third-party application. We concluded that this is an acceptable trade-off: each application needs to be trusted with the privileges that it needs to do its job, and we should not put up barriers that are easy to circumvent or otherwise serve no purpose.

The main (only?) portal so far is the file chooser, in which the contained application asks the wider system to show an "Open..." dialog, and if the user selects a file, it is returned to the contained application through a FUSE filesystem, the document portal. The reference implementation of the UX for this is in GTK, and is basically a GtkFileChooserDialog. The intention is that other environments such as KDE will substitute their own equivalent.
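
As a purely illustrative sketch, a contained application could trigger that dialog with a D-Bus call along these lines; the bus, object and interface names here are assumptions (the portal APIs were still in flux at the time of writing), not a stable contract:

  # Hypothetical invocation of the file chooser portal; names are assumptions.
  gdbus call --session \
      --dest org.freedesktop.portal.Desktop \
      --object-path /org/freedesktop/portal/desktop \
      --method org.freedesktop.portal.FileChooser.OpenFile \
      '' 'Open Document' '{}'

If the user picks a file, it would then appear under the document portal's FUSE mount for the contained application to read.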

Other planned portals include:

  • image capture (scanner/camera)
  • opening a specified URI
    • this needs design feedback on how it should work for non-http(s)
  • sharing content, for example on social networks (like Android's Sharing menu)
  • proxying joystick/gamepad input (perhaps via Wayland or FUSE, or perhaps by modifying libraries like SDL with a new input source)
  • network proxies (GProxyResolver) and availability (GNetworkMonitor)
  • contacts/address book, probably vCard-based
  • notifications, probably based on freedesktop.org Notifications
  • video streaming (perhaps using Pinos, analogous to PulseAudio but for video)

Environment variables

GNOME on Wayland currently has a problem with environment variables: there are some traditional ways to set environment variables for X11 sessions or login shells using shell script fragments (/etc/X11/Xsession.d, /etc/X11/xinit/xinitrc.d, /etc/profile.d), but these do not apply to Wayland, or to noninteractive login environments like cron and systemd --user. We are also keen to avoid requiring a Turing-complete shell language during session startup, because it's difficult to reason about and potentially rather inefficient.

Some uses of environment variables can be dismissed as unnecessary or even unwanted, similar to the statement in Debian Policy §9.9: "A program must not depend on environment variables to get reasonable defaults." However, there are two common situations where environment variables can be necessary for proper OS integration: search-paths like $PATH, $XDG_DATA_DIRS and $PYTHONPATH (particularly necessary for things like Flatpak), and optionally-loaded modules like $GTK_MODULES and $QT_ACCESSIBILITY where a package influences the configuration of another package.

There is a stopgap solution in GNOME's gdm display manager, /usr/share/gdm/env.d, but this is gdm-specific and insufficiently expressive to provide the functionality needed by Flatpak: "set XDG_DATA_DIRS to its specified default value if unset, then add a couple of extra paths".

pam_env comes closer — PAM is run at every transition from "no user logged in" to "user can execute arbitrary code as themselves" — but it doesn't support .d fragments, which are required if we want distribution packages to be able to extend search paths. pam_env also turns off per-user configuration by default, citing security concerns.

I'll write more about this when I have a concrete proposal for how to solve it. I think the best solution is probably a PAM module similar to pam_env but supporting .d directories, either by modifying pam_env directly or out-of-tree, combined with clarifying what the security concerns for per-user configuration are and how they can be avoided.

Relocatable binary packages

On Windows and OS X, various GLib APIs automatically discover where the application binary is located and use search paths relative to that; for example, if C:\myprefix\bin\app.exe is running, GLib might put C:\myprefix\share into the result of g_get_system_data_dirs(), so that the application can ask to load app/data.xml from the data directories and get C:\myprefix\share\app\data.xml. We would like to be able to do the same on Linux, for example so that the apps in a Flatpak or Snap package can be constructed from RPM or dpkg packages without needing to be recompiled for a different --prefix, and so that other third-party software packages like the games on Steam and gog.com can easily locate their own resources.

Relatedly, there are currently no well-defined semantics for what happens when a .desktop file or a D-Bus .service file has Exec=./bin/foo. The meaning of Exec=foo is well-defined (it searches $PATH) and the meaning of Exec=/opt/whatever/bin/foo is obvious. When this came up in D-Bus previously, my assertion was that the meaning should be the same as in .desktop files, whatever that is.

We agreed to propose that the meaning of a non-absolute path in a .desktop or .service file should be interpreted relative to the directory where the .desktop or .service file was found: for example, if /opt/whatever/share/applications/foo.desktop says Exec=../../bin/foo, then /opt/whatever/bin/foo would be the right thing to execute. While preparing a mail to the freedesktop and D-Bus mailing lists proposing this, I found that I had proposed the same thing almost 2 years ago... this time I hope I can actually make it happen!
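
To make the proposed semantics concrete, here is a sketch of a hypothetical relocatable desktop file installed as /opt/whatever/share/applications/foo.desktop:

  [Desktop Entry]
  Type=Application
  Name=Foo
  # Resolved relative to the directory containing this file,
  # i.e. /opt/whatever/bin/foo under the proposal.
  Exec=../../bin/foo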

Flatpak and OSTree bug fixing

On the way to the hackfest, and while the discussion moved to topics that I didn't have useful input on, I spent some time fixing up the Debian packaging for Flatpak and its dependencies. In particular, I did my first upload as a co-maintainer of bubblewrap, uploaded ostree to unstable (with the known limitation that the grub, dracut and systemd integration is missing for now since I haven't been able to test it yet), got most of the way through packaging Flatpak 0.6.5 (which I'll upload soon), cherry-picked the right patches to make ostree compile on Debian 8 in an effort to make backports trivial, and spent some time disentangling a flatpak test failure which was breaking the Debian package's installed-tests. I'm still looking into ostree test failures on little-endian MIPS, which I was able to reproduce on a Debian porterbox just before the end of the hackfest.

OSTree + Debian

I also had some useful conversations with developers from Endless, who recently opened up a version of their OSTree build scripts for public access. Hopefully that information brings me a bit closer to being able to publish a walkthrough for how to deploy a simple Debian derivative using OSTree (help with that is very welcome of course!).

GTK life-cycle and versioning

The life-cycle of GTK releases has already been mentioned here and elsewhere, and there are some interesting responses in the comments on my earlier blog post.

It's important to note that what we discussed at the hackfest is only a proposal: a hackfest discussion between a subset of the GTK maintainers and a small number of other GTK users (I am in the latter category) doesn't, and shouldn't, set policy for all of GTK or for all of GNOME. I believe the intention is that the GTK maintainers will discuss the proposals further at GUADEC, and make a decision after that.

As I said before, I hope that being more realistic about API and ABI guarantees can avoid GTK going too far towards either of the possible extremes: either becoming unable to advance because it's too constrained by compatibility, or breaking applications because it isn't constrained enough. The current situation, where it is meant to be compatible within the GTK 3 branch but in practice applications still sometimes break, doesn't seem ideal for anyone, and I hope we can do better in future.

Acknowledgements

Thanks to everyone involved, particularly:

  • Matthias Clasen, who organised the hackfest and took a lot of notes
  • Allison Lortie, who provided on-site cat-herding and led us to some excellent restaurants
  • Red Hat Inc., who provided the venue (a conference room in their Toronto office), snacks, a lot of coffee, and several participants
  • my employers Collabora Ltd., who sponsored my travel and accommodation

June 20, 2016 06:37 PM

June 15, 2016

Andrew Shadura

Migrate to systemd without a reboot

Yesterday I was fixing an issue with one of the servers behind kallithea-scm.org: the hook intended to propagate pushes from Our Own Kallithea to Bitbucket stopped working. Until yesterday, that server was using Debian's flavour of System V init and djb's dæmontools to keep things running. To make the hook asynchronous, I wrote a service to be managed by dæmontools, so that concurrency issues would be solved by it. However, I didn't implement any timeouts, so when wget froze last week while pulling Weblate's hook, there was nothing to interrupt it, and the hook stopped working, since dæmontools thought it was already running and wouldn't re-trigger it. Killing wget helped, but I decided I needed to do something to prevent the situation from happening in the future.

I've been using systemd at work for the last year, so I am now confident I'm happier with systemd than with dæmontools, and I decided to switch the server to systemd. Not surprisingly, I prepared unit files in about 5 minutes without having to look into the manuals again, while with dæmontools I had to check things every time I needed to change something. The tricky part was the switch itself. It is a virtual server, presumably running in Xen, and I don't have access to the console, so if I break something, I need to summon Bradley Kuhn or someone from Conservancy, who kindly donated the server to the project. In any case, I decided to attempt the upgrade without a reboot, so that I'd have more options to roll back my changes in case things went wrong.

After studying the manpages of both systemd's init and sysvinit's init, I realised I could install systemd as /sbin/init and ask the already running System V init to re-exec. However, systemd's init can't talk to System V init, so before installing systemd I made a backup of the sysvinit binary. It's also important to stop all running services (except probably ssh) to make sure systemd doesn't start a second instance of each. And then: /tmp/init u, and we're running systemd! A couple of additional checks, and it's safe to reboot.
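
For the curious, the sequence looked roughly like this. This is a hedged reconstruction, not a transcript of the actual commands, and it assumes Debian package and path names; don't try it on a machine you can't recover via a console:

  # Keep a copy of the sysvinit binary: once systemd owns /sbin/init,
  # only this copy can still speak sysvinit's control protocol.
  cp /sbin/init /tmp/init

  # Stop running services (but keep ssh!) so systemd won't start duplicates.
  /etc/init.d/apache2 stop   # ...and so on for each service

  # Install systemd as /sbin/init.
  apt-get install systemd-sysv

  # Ask the running System V init (PID 1) to re-exec /sbin/init, now systemd.
  /tmp/init u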

Only when I had done all that did I realise that, if systemd didn't work, I'd probably not be able to undo my changes if my connection got interrupted. So, even though in the end it worked, it's probably not a good idea to perform such manipulations when you don't have an alternative way to connect to the server :)

June 15, 2016 11:51 AM

June 14, 2016

Simon McVittie

GTK versioning and distributions

Allison Lortie has provoked a lot of comment with her blog post on a new proposal for how GTK is versioned. Here's some more context from the discussion at the GTK hackfest that prompted that proposal: there's actually quite a close analogy in how new Debian versions are developed.

The problem we're trying to address here is the two sides of a trade-off:

  • Without new development, a library (or an OS) can't go anywhere new
  • New development sometimes breaks existing applications

Historically, GTK has aimed to keep compatible within a major version, where major versions are rather far apart (GTK 1 in 1998, GTK 2 in 2002, GTK 3 in 2011, GTK 4 somewhere in the future). Meanwhile, fixing bugs, improving performance and introducing new features sometimes results in major changes behind the scenes. In an ideal world, these behind-the-scenes changes would never break applications; however, the world isn't ideal. (The Debian analogy here is that as much as we aspire to having the upgrade from one stable release to the next not break anything at all, I don't think we've ever achieved that in practice - we still ask users to read the release notes, even though ideally that wouldn't be necessary.)

In particular, the perceived cost of doing a proper ABI break (a fully parallel-installable GTK 4) means there's a strong temptation to make changes that don't actually remove or change C symbols, but are clearly an ABI break, in the sense that an application that previously worked and was considered correct no longer works. A prominent recent example is the theming changes in GTK 3.20: the ABI in terms of functions available didn't change, but what happens when you call those functions changed in an incompatible way. This makes GTK hard to rely on for applications outside the GNOME release cycle, which is a problem that needs to be fixed (without stopping development from continuing).

The goal of the plan we discussed today is to decouple the latest branch of development, which moves fast and sometimes breaks API, from the API-stable branches, which only get bug fixes. This model should look quite familiar to Debian contributors, because it's a lot like the way we release Debian and Ubuntu.

In Debian, at any given time we have a development branch (testing/unstable) - currently "stretch", the future Debian 9. We also have some stable branches, of which the most recent are Debian 8 "jessie" and Debian 7 "wheezy". Different users of Debian have different trade-offs that lead them to choose one or the other of these. Users who value stability and want to avoid unexpected changes, even at a cost in terms of features and fixes for non-critical bugs, choose to use a stable release, preferably the most recent; they only need to change what they run on top of Debian for OS API changes (for instance webapps, local scripts, or the way they interact with the GUI) approximately every 2 years, or perhaps less often than that with the Debian-LTS project supporting non-current stable releases. Meanwhile, users who value the latest versions and are willing to work with a "moving target" as a result choose to use testing/unstable.

The GTK analogy here is really quite close. In the new versioning model, library users who value stability over new things would prefer to use a stable-branch, ideally the latest; library users who want the latest features, the latest bug-fixes and the latest new bugs would use the branch that's the current focus of development. In practice we expect that the latter would be mostly GNOME projects. There's been some discussion at the hackfest about how often we'd have a new stable-branch: the fastest rate that's been considered is a stable-branch every 2 years, similar to Ubuntu LTS and Debian, but there's no consensus yet on whether they will be that frequent in practice.

How many stable versions of GTK would end up shipped in Debian depends on how rapidly projects move from "old-stable" to "new-stable" upstream, how much those projects' Debian maintainers are willing to patch them to move between branches, and how many versions the release team will tolerate. Once we reach a steady state, I'd hope that we might have 1 or 2 stable-branched versions active at a time, packaged as separate parallel-installable source packages (a lot like how we handle Qt). GTK 2 might well stay around as an additional active version just from historical inertia. The stable versions are intended to be fully parallel-installable, just like the situation with GTK 1.2, GTK 2 and GTK 3 or with the major versions of Qt.

For the "current development" version, I'd anticipate that we'd probably only ship one source package, and do ABI transitions for one version active at a time, a lot like how we deal with libgnome-desktop and the evolution-data-server family of libraries. Those versions would have parallel-installable runtime libraries but non-parallel-installable development files, again similar to libgnome-desktop.

At the risk of stretching the Debian/Ubuntu analogy too far, the intermediate "current development" GTK releases that would accompany a GNOME release are like Ubuntu's non-LTS suites: they're more up to date than the fully stable releases (Ubuntu LTS, which has a release schedule similar to Debian stable), but less stable and not supported for as long.

Hopefully this plan can meet both of its goals: minimize breakage for applications, while not holding back the development of new APIs.

June 14, 2016 01:56 AM

June 06, 2016

Helen Koike

[How to] Speed up compilation time with Icecc

With Icecc (Icecream) you can use other machines in your local network to compile for you.

If you have a single machine, usually you would do (for a quad-core machine) something like:

make -j4

This command will generate four compilation jobs and distribute them to your CPU cores, compiling the jobs in parallel.

But if you have another machine in your local network, Icecc lets you use the cores of that other machine too. If the other machine is dual-core, you could run:

make -j6

How does it work?


When you call make -jN, instead of calling the classic GNU Gcc, we "trick" make so it calls another "Gcc" binary provided by Icecc (by changing the PATH).

The make command will generate the jobs and call the Icecc Gcc, which will send the source files to the scheduler, which in turn forwards the jobs to the remote machines (or to itself, or to the machine that started the compilation).



How to set up the network?


Easy on Ubuntu:

* Run the following commands on every computer in the network:

$ sudo apt-get install icecc

$ export PATH=/usr/lib/icecc/bin:$PATH

Check that the gcc in /usr/lib/icecc is being used:

$ which gcc
/usr/lib/icecc/bin/gcc

Let's say the IP address of the machine you chose to be the scheduler is 192.168.0.34. Edit the file /etc/icecc/icecc.conf and change the following variables (again, on all the machines in the network):

ICECC_NETNAME="icecc_net"
ICECC_ALLOW_REMOTE="yes"
ICECC_SCHEDULER_HOST="192.168.0.34"

Reset the Icecc Deamon

sudo service iceccd restart

* Do the following command in the the scheduler machine 192.168.0.34:

sudo service icecc-scheduler start

How can I know if it works?


Install and Run the monitor:

$ sudo apt-get install icecc icecc-monitor

$ icemon -n icecc_net

You should see all machines and an indicator saying that the network is online:


In this case I have 3 machines, the first two have four cores and the last one just one core.

When I compile something with make -j9 I see the Jobs number growing and the slots being filled.

Done!!!

CCache with Icecc (edited):

To  speed up even more your compilation time, you can setup CCache (explained in the last post).

The general ideia is: check in a local cache first (using CCache) if the source files have been already compiled, if not, then give the job to Icecc.

When using with CCache, you don't need to add Icecc in the PATH, we use CCACHE_PREFIX instead:

$ export CCACHE_PREFIX=icecc

$ echo $PATH
/usr/lib/ccache:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games

$ which gcc
/usr/lib/ccache/gcc

by Helen Fornazier (noreply@blogger.com) at June 06, 2016 01:13 AM

[How to] Speed up compilation time with CCache

CCache stores the compiled version of your source files into a cache. If you tries to compile the same source files again, you will get a cache hit and will retrieve the compiled objects from the cache instead of compiling them again.

How it works?


 Instead of calling the GNU Gcc compiler directly, you will point your PATH to another Gcc binary provided by CCache that will check the cache first and them call the GNU Gcc if necessary:



How to install it?


Easy on Ubuntu:

$ sudo apt-get install ccache

Then change you PATH to point to the Ccache gcc (not just gcc) version:

$ export PATH="/usr/lib/ccache:$PATH"

Check with:

$ which gcc
/usr/lib/ccache/gcc

Done!!!

How can I know if it works? 


You can re-compile something and check if CCache is working with ccache -s command, you should have some cache hits:

You can increase/decrease your cache size with:

$ ccache --max-size=5G


Troubleshooting: Re-compiling Linux kernel never hit the cache?

Check if the flag CONFIG_LOCALVERSION_AUTO is set in you menuconfig, disable it and try again.
This flag seems to change a core header file as it appends the git version to the version string automatically, forcing CCache to recompile almost it all.


CCache with Icecc (edited):

If you want to use CCache with Icecc (that I'll explain about it in another post) to speed up even more your compilation time, use CCACHE_PREFIX=icecc (thanks Joel Rosdahl who commented about this)

$ export CCACHE_PREFIX=icecc

NOTE: You don't need to add icecc /usr/lib/icecc/bin in your PATH

by Helen Fornazier (noreply@blogger.com) at June 06, 2016 01:07 AM

June 05, 2016

Simon McVittie

Flatpak in Debian

Quite a lot has happened in xdg-app since last time I blogged about it. Most noticeably, it isn't called xdg-app any more, having been renamed to Flatpak. It is now available in Debian experimental under that name, and the xdg-app package that was briefly there has been removed. I'm currently in the process of updating Flatpak to the latest version 0.6.4.

The privileged part has also spun off into a separate project, Bubblewrap, which recently had its first release (0.1.0). This is intended as a common component with which unprivileged users can start a container in a way that won't let them escalate privileges, like a more flexible version of linux-user-chroot.

Bubblewrap has also been made available in Debian, maintained by Laszlo Boszormenyi (also maintainer of linux-user-chroot). Yesterday I sent a patch to update Laszlo's packaging for 0.1.0. I'm hoping to become a co-maintainer to upload that myself, since I suspect Flatpak and Bubblewrap might need to track each other quite closely. For the moment, Flatpak still uses its own internal copy of Bubblewrap, but I consider that to be a bug and I'd like to be able to fix it soon.

At some point I also want to experiment with using Bubblewrap to sandbox some of the game engines that are packaged in Debian: networked games are a large attack surface, and typically consist of the sort of speed-optimized C or C++ code that is an ideal home for security vulnerabilities. I've already made some progress on jailing game engines with AppArmor, but making sensitive files completely invisible to the game engine seems even better than preventing them from being opened.

Next weekend I'm going to be heading to Toronto for the GTK Hackfest, primarily to to talk to GNOME and Flatpak developers about their plans for sandboxing, portals and Flatpak. Hopefully we can make some good progress there: the more I know about the state of software security, the less happy I am with random applications all being equally privileged. Newer display technologies like Wayland and Mir represent an opportunity to plug one of the largest holes in typical application containerization, which is a major step in bringing sandboxes like Flatpak and Snap from proof-of-concept to a practical improvement in security.

Other next steps for Flatpak in Debian:

  • To get into the next stable release (Debian 9), Flatpak needs to move from experimental into unstable and testing. I've taken the first step towards that by uploading libgsystem to unstable. Before Flatpak can follow, OSTree also needs to move.
  • Now that it's in Debian, please report bugs in the usual Debian way or send patches to fix bugs: Flatpak, OSTree, libgsystem.
  • In particular, there are some OSTree bugs tagged help. I'd appreciate contributions to the OSTree packaging from people who are interested in using it to deploy dpkg-based operating systems - I'm primarily looking at it from the Flatpak perspective, so the boot/OS side of it isn't so well tested. Red Hat have rpm-ostree, and I believe Endless do something analogous to build OS images with dpkg, but I haven't had a chance to look into that in detail yet.
  • Co-maintainers for Flatpak, OSTree, libgsystem would also be very welcome.

June 05, 2016 11:24 AM

June 03, 2016

memcpy.io - Robert Foss

Running Weston on a Raspbian

Alt text

Progress in the VC4 graphics camp and the Wayland camp now enables us to run Weston on top of the drm backend for VC4 platforms. Previously software acceleration using pixman was needed, but this is no longer the case.

Additionally the rpi backend for weston is now being removed since it has been obsoleted by the improved drm layer.

Let's explore running hardware accelerated Weston on the Raspberry Pi.

Building Linux kernel

A comprehensive guide for building a recent Linux kernel for Raspberry Pi boards has been written by the Raspberry Pi foundation and is available here.

As of this writing the guide helps you build a v4.4 kernel which is good enough for our purposes.

Set up alternative install location

These build instructions are based on the Wayland instructions from freedesktop.org, but altered to target VC4 and Raspbian.

You probably don't want to install experimental builds of software among the usual software of your operating system, so let's define a prefix for where to install our builds.

# Change WLD to any location you like
export WLD=~/local
export LD_LIBRARY_PATH=$WLD/lib
export PKG_CONFIG_PATH=$WLD/lib/pkgconfig/:$WLD/share/pkgconfig/
export PATH=$WLD/bin:$PATH
export ACLOCAL_PATH=$WLD/share/aclocal
export ACLOCAL="aclocal -I $ACLOCAL_PATH"

# Needed by autotools
mkdir -p $WLD/share/aclocal

Installing dependencies

Start by installing the build dependencies of mesa, weston and wayland.

# Enable source packages
sudo sed -e "s/#\sdeb-src/deb-src/g" -i /etc/apt/sources.list
sudo apt update

The above step can alternatively be completed using the GUI of your package manager, by enabling source packages.

# Install build dependencies of mesa
sudo apt-get build-dep mesa

# Install build dependencies of wayland/weston
sudo apt-get install \
  libevdev libevdev-dev \
  libwacom libwacom-dev \
  libxkbcommon libxkbcommon-dev

Building Mesa

Configure and compile mesa with vc4, wayland and EGL support.

git clone git://anongit.freedesktop.org/mesa/mesa
cd mesa
./autogen.sh --prefix=$WLD \
  --enable-gles2 \
  --with-egl-platforms=x11,wayland,drm \
  --enable-gbm --enable-shared-glapi \
  --with-gallium-drivers=vc4 \
  --without-dri-drivers \
  --disable-va \
  --disable-vdpau \
  --disable-xvmc \
  --disable-omx
make -j4 && make install

Building Weston and dependencies

Weston and Wayland have a number of dependencies that also need to be fetched and built.

Wayland

Weston is a Wayland compositor, so we're going to have to build Wayland.

git clone git://anongit.freedesktop.org/wayland/wayland
cd wayland
./autogen.sh --prefix=$WLD
make -j4 && make install
cd ..

git clone git://anongit.freedesktop.org/wayland/wayland-protocols
cd wayland-protocols
./autogen.sh --prefix=$WLD
make install
cd ..

libinput

libinput is a dependency of Wesron, handles input devices like keyboards, touchpads and mice.

git clone git://anongit.freedesktop.org/wayland/libinput
cd libinput
./autogen.sh --prefix=$WLD
make -j4 && make install
cd ..

Weston

Finally we've built all of the dependencies of Weston and can now build it.

git clone git://anongit.freedesktop.org/wayland/weston
cd weston
./autogen.sh --prefix=$WLD \
  --disable-libunwind
make -j4 &&
sudo make install
cd ..

Running Weston

That wasn't so bad, it took a little while, but now we're ready to start Weston. Now, let's fire up a (virtual) terminal. Make sure that you're not running an X terminal, ssh terminal or serial terminal.

Running weston in this way depends on logind.

# Make sure that $DISPLAY is unset.
unset DISPLAY

# And that $XDG_RUNTIME_DIR has been set and created.
if test -z "${XDG_RUNTIME_DIR}"; then
  export XDG_RUNTIME_DIR=/tmp/${UID}-runtime-dir
  if ! test -d "${XDG_RUNTIME_DIR}"; then
    mkdir "${XDG_RUNTIME_DIR}"
    chmod 0700 "${XDG_RUNTIME_DIR}"1
  fi
fi

# Run weston:
weston

Try weston applications

Now that we're running weston, let's try some applications. They're located in the top level directory of weston.

  • weston-terminal
  • weston-flower
  • weston-gears
  • weston-smoke
  • weston-image
  • weston-view
  • weston-resizor
  • weston-eventdemo

When you've started all of your favorite applications you can grab a screenshot by pressing Super + s, which will save wayland-screenshot.png in your home directory.

by Robert Foss at June 03, 2016 08:32 AM

Linux kernel development shell scripts

Alt text

While upstreaming kernel patches scripts/checkpatch.pl and scripts/get_maintainer.pl often come in handy. But to me the interface they provide is slightly bulky and rely on using patch files instead of git commits, which to me is a bit inconvenient.

These scripts are all meant to be included in .bashrc or .zshrc

scripts/checkpatch.pl helper

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
#!/bin/bash
function checkpatch {
  if [ -z ${1+x} ]; then
    exec git diff | scripts/checkpatch.pl --no-signoff -q -
  elif [[ $1 == *"cache"* ]]; then
    exec git diff --cached | scripts/checkpatch.pl --no-signoff -q -
  else
    NUM_COMMITS=$1
    exec git diff HEAD~$NUM_COMMITS..HEAD | scripts/checkpatch.pl --no-signoff -q -
  fi
}

The checkpatch script simple wraps the patch creation process and allows you to right away specify which

Example

~/work/linux $ checkpatch 15
WARNING: ENOSYS means 'invalid syscall nr' and nothing else
#349: FILE: drivers/tty/serial/sh-sci.c:3026:
+   if (IS_ERR(sciport->gpios) && PTR_ERR(sciport->gpios) != -ENOSYS)

total: 0 errors, 1 warnings, 385 lines checked

In this example the 15 last commits are checked against scripts/checkpatch.pl for correctness.

scripts/get_maintainer.pl helper

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
15
16
17
18
19
#!/bin/bash
function get_maintainer {
  NUM_COMMITS=$1

  MAINTAINERS=$(git format-patch HEAD~$NUM_COMMITS..HEAD --stdout | scripts/get_maintainer.pl)

  # Remove extraneous stats
  MAINTAINERS=$(echo "$MAINTAINERS" | sed 's/(.*//g')

  # Remove names from email addresses
  MAINTAINERS=$(echo "$MAINTAINERS" | sed 's/.*<//g')

  # Remove left over character
  MAINTAINERS=$(echo "$MAINTAINERS" | sed 's/>//g')

  echo "$MAINTAINERS" | while read email; do
    echo -n "--to=${email}  ";
  done
}

Example

~/work/linux $ get_maintainer 1
--to=gregkh@linuxfoundation.org  --to=jslaby@suse.com  --to=linux-serial@vger.kernel.org  --to=linux-kernel@vger.kernel.org

~/work/linux $ git send-email -1 $(get_maintainer 1)

by Robert Foss at June 03, 2016 08:32 AM

May 25, 2016

Olivier Crête

GStreamer Spring Hackfest 2016

After missing the last few GStreamer hackfests I finally managed to attend this time. It was held in Thessaloniki, Greece’s second largest city. The city is located by the sea side and the entire hackfest and related activities were either directly by the sea or just a couple blocks away.

Collabora was very well represented, with Nicolas, Mathieu, Lubosz also attending.

Nicolas concentrated his efforts on making kmssink and v4l2dec work together to provide zero-copy decoding and display on a Exynos 4 board without a compositor or other form of display manager. Expect a blog post soon  explaining how to make this all fit together.

Lubosz showed off his VR kit. He implemented a viewer for planar point clouds acquired from a Kinect. He’s working on a set of GStreamer plugins to play back spherical videos. He’s also promised to blog about all this soon!

Mathieu started the hackfest by investigating the intricacies of Albanian customs, then arrived on the second day in Thessaloniki and hacked on hotdoc, his new fancy documentation generation tool. He’ll also be posting a blog about it, however in the meantime you can read more about it here.

As for myself, I took the opportunity to fix a couple GStreamer bugs that really annoyed me. First, I looked into bug #766422: why glvideomixer and compositor didn’t work with RTSP sources. Then I tried to add a ->set_caps() virtual function to GstAggregator, but it turns out I first needed to delay all serialized events to the output thread to get predictable outcomes and that was trickier than expected. Finally, I got distracted by a bee and decided to start porting the contents of docs.gstreamer.com to Markdown and updating it to the GStreamer 1.0 API so we can finally retire the old GStreamer.com website.

I’d also like to thank Sebastian and Vivia for organising the hackfest and for making us all feel welcomed!

GStreamer Hackfest Venue

by ocrete at May 25, 2016 08:43 PM

May 17, 2016

Gustavo Padovan

Collabora contributions to Linux Kernel 4.6

Linux Kernel 4.6 was released this week, and a total of 9 Collabora engineers took part in its development, Collabora’s highest number of engineers contributing to a single Linux Kernel release yet. In total Collabora contributed 42 patches.

As part of Collabora’s continued commitment to further increase its participation to the Linux Kernel, Collabora is actively looking to expand its team of core software engineers. If you’d like to learn more, follow this link.

Here are some highlights of Collabora’s participation in Kernel 4.6:

Andrew Shadura fixed the number of buttons reported on the Pemount 6000 USB touchscreen controller, while Daniel Stone enabled BCM283x familiy devices in the ARM multi_v7_defconfig and Emilio López added module autoloading for a few sunxi devices.

Enric Balletbo i Serra added boot console output to AM335X(Sitara) and OMAP3-IGEP and fixed audio codec setup on AM335X using the right external clock. Martyn Welch added the USB device ID for the GE Healthcare cp210x serial device and renamed the reset reason of the Zodiac Watchdog.

Gustavo Padovan cleaned up the Android Sync Framework on the staging tree for further de-staging of the Sync File infrastructure, which will land in 4.7. Most of the work was removing interfaces that won’t be used in mainline. He also added vblank event support for atomic commits in the virtio DRM driver.

Peter Senna improved an error path and added some style fixes to the sisusbvga driver. While Sjoerd Simons enabled wireless on radxa Rock2 boards, fixed an issue withthe brcmfmac sdio driver sometimes timing out with a false positive and fixed some issues with Serial output on Renesas R-Car porter board.

Tomeu Vizoso changed driver_match_device() to return errors and in case of -EPROBE_DEFER queue the device for deferred probing, he also provided two fixes to Rockchip DRM driver as part of his work on making intel-gpu-tools work on other platforms.

Following is a list of all patches submitted by Collabora for this kernel release:

Andrew Shadura (1):

Daniel Stone (1):

Emilio López (4):

Enric Balletbo i Serra (3):

Gustavo Padovan (17):

Martyn Welch (2):

Peter Senna Tschudin (4):

Sjoerd Simons (6):

Tomeu Vizoso (4):

by Gustavo Padovan at May 17, 2016 03:39 PM

April 21, 2016

Tomeu Vizoso

Validating changes to KMS drivers with IGT

New DRM drivers are being added to almost each new kernel release, and because the mode setting API is so rich and complex, bugs do slip in that translate to differences in behaviour between drivers.

There have been previous attempts at writing test suites for validating changes and preventing regressions, but they have typically happened downstream and focused on the specific needs of specific products and limited to one or at most a few of different hardware platforms.

Writing these tests from scratch would have been an enormous amount of work, and gathering previous efforts and joining them wouldn't be much worth it because they were written using different test frameworks and in different programming languages. Also, there would be great overlap on the basic tests, and little would remain of the trickier stuff.

Of the existing test suites, the one with most coverage is intel-gpu-tools, used by the Intel graphics team. Though a big part is specific to the i915 driver, what uses the generic APIs is pretty much driver-independent and can be made to work with the other drivers without much effort. Also, Broadcom's Eric Anholt has already started adding tests for IOCTLs specific to the VideoCore-IV driver.

Collabora's Micah Fedke and Daniel Stone had added a facility for selecting DRM device files other than i915's and I improved the abstraction for creating buffers so it works for drivers without GEM buffers. Next I removed a bunch of superfluous dependencies on i915-only stuff and got a useful subset of tests to run on a Radxa Rock2 board (with the Rockchip 3288 SoC). Around half of these patches have been merged already and the other half are awaiting review. Meanwhile, Collabora's Robert Foss is running the ported tests on a Raspberry Pi 2 and has started sending patches to account for its peculiarities.

The next two big chunks of work are abstracting CRC checksums of frames (on drivers other than i915 this could be done with Google's Chamelium or with a board similar to Numato Opsis), and the buffer management API from libdrm that is currently i915-only (bufmgr). Something that will have to be dealt with in the future is abstracting the submittal of specific loads on the GPU as that's currently very much driver-specific.

Additionally, I will be scheduling jobs in our LAVA instance to run these tests on the boards we have in there.

Thanks to Google for sponsoring my time, to the Intel OTC folks for their support and reviews, and to Collabora for sponsoring Robert's, Micah's and Daniel's time.

by Tomeu Vizoso (noreply@blogger.com) at April 21, 2016 01:02 PM

April 16, 2016

Tollef Fog Heen

Blog moved, new tech

I moved my blog around a bit and it appears that static pages are now in favour, so I switched to that, by way of Hugo. CSS and such needs more tweaking, but it’ll make do for now.

As part of this, RSS feeds and such changed, if you want to subscribe to this (very seldomly updated) blog, use https://err.no/personal/blog/index.xml

April 16, 2016 08:42 PM

March 26, 2016

memcpy.io - Robert Foss

Coverpage template

Alt text

Coverpage is a single-page landing page built to showcase an idea or a product. To allow interested parties to get notified of updates, the template has mailchimp subscription integration.

A live version of the site can be found at coverpage.memcpy.io.

Sources

git clone https://github.com/robertfoss/coverpage.git

GitHub hosting

This template was built with the explicit intention of having it be hosted at GitHub in a gh-pages branch. Therefore it includes a Makefile for pushing copy of the current design to a gh-pages branch.

by Robert Foss at March 26, 2016 11:34 PM

March 16, 2016

Gustavo Padovan

Collabora contributions to Linux Kernel 4.5

Linux Kernel 4.5 was released earlier this week, and once again Collabora engineers played a role in its development. In addition to their current projects, seven Collabora engineers contributed a total of 33 patches to the new Kernel.

As part of its continued committment to further increase ts participation to the Linux Kernel, Collabora is looking to expand its team of core software engineers. If you’d like to learn more, follow this link.

Here are some highlights of Collabora’s participation in Kernel 4.5:

Daniel Stone improved i915 runtime WARN() messages and fixed an important issue in the component subsystem when component_add() fails. Danilo Cesar made the DRM Docbook ready for Markdown text.

Gustavo Padovan improved the pm_runtime management on the drm/exynos driver and started work on de-staging the Android Sync Framework. On Rockchip, Sjoerd Simons enabled IR receiver to RK3288 Radxa Rock 2 Square, added multi_v7_defconfig for Rockchip audio and enabled RK3288 SPDIF clocks to change their parent. On the net side, Sjoerd added a patch to turn carrier off on phy attach to avoid unknown states and another patch to add ethernet0 alias for the RK3288 to help u-boot find this device-node.

During his brief time with us at Collabora, Heiko Stübner added the dts file for the veyron-brain board, a shutdown callback to platform variant dwc2 devices for a special clock handling to avoid getting stuck on the reboot/poweroff process and multi_v7_defconfig support to Rockchip’s io-domain driver, crypto module and rk808 clkout module. He also enabled support for veyron minnie touchscreen, adjusted temperature limits on veyron-speedy and fixed the edp-24m clock to be associated to the internal 24MHz oscillator all the time.

Martyn Welch added a driver for the Zodiac Aerospace RAVE Watchdog Processor, while Tomeu Vizoso added a device_is_bound() helper function and setter for dev.pm_domain that comes with extra checkings. Tomeu also added a patch to allow USB devices to remain runtime-suspended when sleeping and another patch to optimize sleep by going direct_complete if driver has no prepare and PM callbacks. Lastly, Tomeu also fixed a freq issue on Tegra devfreq_dev_profile.target callback.

Following is a list of all patches submitted by Collabora for this kernel release:

Daniel Stone (3):

Danilo Cesar Lemes de Paula (1):

Gustavo Padovan (9):

Heiko Stübner (8):

Martyn Welch (2):

Sjoerd Simons (5):

Tomeu Vizoso (5):

by Gustavo Padovan at March 16, 2016 06:17 PM

March 01, 2016

Pekka Paalanen

Wayland has been accepted as a Google Summer of Code organization

Now is a high time to start discussing what you might want to do, for both student candidates and possible mentors.

Students, have a look at our project idea examples to get a feeling of what kind of projects you could propose. First you will need to contribute at least a small but significant patch to show that you understand the workflow, we have put some first task ideas together.

There are our application instructions for students. Of course all the pages are reachable from the Wayland GSoC wiki page and also the Wayland organization page.

If you want to become a mentor, please contact me or Kat, the contact details are on the Wayland GSoC wiki page.

Note, that students can also apply under the X.Org Foundation organization since Wayland is within their scope too and they also have other excellent graphics project ideas. You are welcome to submit your Wayland proposals to both projects.

by pq (noreply@blogger.com) at March 01, 2016 11:58 AM

February 26, 2016

Andrew Shadura

Phulud? No, Phulad.

If you bought an North India travel guide by Vanessa Betts and Victoria McCulloch, and tried to figure out where is ‘Phulud’ and how to get there from Deogarh (and how to get to Deogarh itself from Udaipur), don’t waste your time googling, as it’s not Phulud, but Phulad.

It does seem that the narrow gauge journey from Deogarh to Phulad is indeed beautiful:

Meanwhile, I have also found this very interesting post by Mary Anne Erickson: Impressions of India: Udaipur to Deogarh. I’m not yet sure we’re going to follow that route, but it seems promising.

P.S. Despite what the guide said about the airport in Jaisalmer, which is due to open in 2013, according to the reports, it is still not open, so we have to skip that city. Oh well.

EDIT: The guide actually also says, on page 10: Fly from Jaisalmer back to Delhi to connect with your flight home. Fact checking? No, who needs that? :)

February 26, 2016 04:45 PM

February 16, 2016

Pekka Paalanen

A programmer's view on digital images: the essentials

How is an uncompressed raster image laid out in computer memory? How is a pixel represented? What are stride and pitch and what do you need them for? How do you address a pixel in memory? How do you describe an image in memory?

I tried to find a web page for dummies explaining all that, and all I could find was this. So, I decided to write it down myself with the things I see as essential.


An image and a pixel

Wikipedia explains the concept of raster graphics, so let us take that idea as a given. An image, or more precisely, an uncompressed raster image, consists of a rectangular grid of pixels. An image has a width and height measured in pixels, and the total number of pixels in an image is obviously width×height.

A pixel can be addressed with coordinates x,y after you have decided where the origin is and which way the coordinate axes go.

A pixel has a property called color, and it may or may not have opacity (or occupancy). Color is usually described as three numerical values, let us call them "red", "green", and "blue", or R, G, and B. If opacity (or occupancy) exists, it is usually called "alpha" or A. What R, G, B, and A actually mean is irrelevant when looking at how they are stored in memory. The relevant thing is that each of them is encoded with a certain number of bits. Each of R, G, B, and A is called a channel.

When describing how much memory a pixel takes, one can use units of bits or bytes per pixel. Both can be abbreviated as "bpp", so be careful which one it is and favour more explicit names in code. Also bits per channel is used sometimes, and channels can have a different number of bits per pixel each. For example, rgb565 format is 16 bits per pixel, 2 bytes per pixel, 5 bits per R and B channels, and 6 bits per G channel.

A pixel in memory

Pixels do not come in arbitrary sizes. A pixel is usually 32 or 16 bits, or 8 or even 1 bit. 32 and 16 bit quantities are easy and efficient to process on 32 and 64 bit CPUs. Your usual RGB-image with 8 bits per channel is most likely in memory with 32 bit pixels, the extra 8 bits per pixel are simply unused (often marked with X in pixel format names). True 24 bits per pixel formats are rarely used in memory because trading some memory for simpler and more efficient code or circuitry is almost always a net win in image processing. The term "depth" is often used to describe how many significant bits a pixel uses, to distinguish from how many bits or bytes it occupies in memory. The usual RGB-image therefore has 32 bits per pixel and a depth of 24 bits.

How channels are packed in a pixel is specified by the pixel format. There are dozens of pixel formats. When decoding a pixel format, you first have to understand if it is referring to an array of bytes (particularly used when each channel is 8 bits) or bits in a unit. A 32 bits per pixel format has a unit of 32 bits, that is uint32_t in C parlance, for instance.

The difference between an array of bytes and bits in a unit is the CPU architecture endianess. If you have two pixel formats, one written in array of bytes form and one written in bits in a unit form, and they are equivalent on big-endian architecture, then they will not be equivalent on little-endian architecture. And vice versa. This is important to remember when you are mapping one set of pixel formats to another, between OpenGL and anything else, for instance. Figure 1 shows three different pixel format definitions that produce identical binary data in memory.

Figure 1. Three equivalent pixel formats with 8 bits for each channel. The writing convention here is to list channels from highest to lowest bits in a unit. That is, abgr8888 has r in bits 0-7, g in bits 8-15, etc.

It is also possible, though extremely rare, that architecture endianess also affects the order of bits in a byte. Pixman, undoubtedly inheriting it from X11 pixel format definitions, is the only place where I have seen that.

An image in memory

The usual way to store an image in memory is to store its pixels one by one, row by row. The origin of the coordinates is chosen to be the top-left corner, so that the leftmost pixel of the topmost row has coordinates 0,0. First there are all the pixels of the first row, then the second row, and so on, including the last row. A two-dimensional image has been laid out as a one-dimensional array of pixels in memory. This is shown in Figure 2.

Image layout in memory.
Figure 2. The usual layout of pixels of an image in memory.
There are not only the width×height number of pixels, but each row also has some padding. The padding area is not used for storing anything, it only aligns the length of the row. Having padding requires a new concept: image stride.

Padding is often necessary due to hardware reasons. The more specialized and efficient hardware for pixel manipulation, the more likely it is that it has specific requirements on the row start and length alignment. For example, Pixman and therefore also Cairo (image backend particularly) require that rows are aligned to 4 byte boundaries. This makes it easier to write efficient image manipulations using vectorized or other instructions that may even process multiple pixels at the same time.

Stride or pitch

Image width is practically always measured in pixels. Stride on the other hand is related to memory addresses and therefore it is often given in bytes. Pitch is another name for the same concept as stride, but can be in different units.

You may have heard rules of thumb that stride is in bytes and pitch is in pixels, or vice versa. Stride and pitch are used interchangeably, so be sure of the conventions used in the code base you might be working on. Do not trust your instinct on bytes vs. pixels here.

Addressing a pixel

How do you compute the memory address of a given pixel x,y? The canonical formula is:
pixel_address = data_begin + y * stride_bytes + x * bytes_per_pixel.
The formula stars with the address of the first pixel in memory data_begin, then skips to row y while each row is stride_bytes long, and finally skips to pixel x on that row.

In C code, if we have 32 bit pixels, we can write
uint32_t *p = data_begin;
p += y * stride_bytes / sizeof(uint32_t);
p += x;
Notice, how the type of p affects the computations, counting in units of uint32_t instead of bytes.

Let us assume the pixel format in this example is argb8888 which is defined in bits of a unit form, and we want to extract the R value:
uint32_t v = *p;
uint8_t r = (v >> 16) & 0xff;
Finally, Figure 3 gives a cheat sheet.

Figure 3. How to compute the address of a pixel.

Now we have covered the essentials, and you can stop reading. The rest is just good to know.

Not everyone has the "right" way up

In the above we have assumed that the image origin is the top-left corner, and rows are stored top-most first. The most notable exception to this is the OpenGL API, which defines image data to be in bottom-most row first. (Traditionally also BMP file format does this.)

Multi-planar formats

In the above, we have talked about single-planar formats. That means that there is only a single two-dimensional array of pixels forming an image. Multi-planar formats use two or more two-dimensional arrays for forming an image.

A simple example with an RGB-image would be to store R channel in the first plane (2D-array) and GB channels in the second plane. Pixels on the first plane have only R value, while pixels on the second plane have G and B values. However, this example is not used in practice.

Common and real use cases for multi-planar images are various YUV color formats. Y channel is stored on the first plane, and UV channels are stored on the second plane, for instance. A benefit of this is that e.g. the UV plane can be sub-sampled - its resolution could be only half of the plane with Y, saving some memory.

Tiled formats

If you have read about GPUs, you may have heard of tiling or tiled formats (tiled renderer is a different thing). These are special pixel layouts, where an image is not stored row by row but a rectangular block by block. Tiled formats are far too wild and various to explain here, but if you want a taste, take a look at Nouveau's documentation on G80 surface formats.

by pq (noreply@blogger.com) at February 16, 2016 02:23 PM

February 07, 2016

memcpy.io - Robert Foss

ESP8266 APA102 Bulb

Alt text

The product of this project is a WiFi connected LED bulb. Every LED on this bulb is individually programmable over the WiFi, by simply sending UDP packets to the bulb.

Software and hardware sources

git clone https://github.com/robertfoss/esp8266_apa102_bulb.git

This project consists of 3 parts: the software running on the led bulb, the software running on some host computer and the hardware.

Firmware

The firmare is based on the NodeMCU firwmare for the ESP8266. It's running the APA102 LED driver and the enduser setup module, which I've written about previously.

Additionally it's running 3 lua scripts that deal with different aspects.

There's init.lua which makes sure we're connected to a WiFi.

udp_listener.lua receives UDP packets and then sends forwards that data to the APA102 strips.

And lastly udb_broadcast.lua which periodically broadcasts a heartbeat for this LED bulb to signal that it is alive and well.

Host application

The current (as of the publish date of this post) incarnation of the host application listens for bulbs that are alive on the hosts network. If a bulb is found is will be added to the list of bulbs to be animated. All animations are simple and sinusoidal and only use the time a bulb has been 'alive' as an input for the animation.

Hardware

The hardware is based around the ESP8266 WiFi IC and the APA102 SPI LED IC.

The flavor of ESP8266 used in this project is the ESP12-F module, since it the latest module available with the integrated antenna form factor.

APA102 was chosen instead of the much more common WS2812B chip, since it uses a SPI like protocol which isn't timing sensitive and also does not require external capacitors at next to each LED.

v3.1 Schematic

Alt text

v2 3D Model

Alt text

Assembled v2 hardware

Alt text Alt text Alt text

by Robert Foss at February 07, 2016 09:46 PM

February 06, 2016

Andrew Shadura

Community time at Collabora

I haven’t yet blogged about this (as normally I don’t blog often), but I joined Collabora in June last year. Since then, I had an opportunity to work with OpenEmbedded again, write a kernel patch, learn lots of things about systemd (in particular, how to stop worrying about it taking over the world and so on), and do lots of other things.

As one would expect when working for a free software consultancy, our customers do understand the value of the community and contributing back to it, and so does the customer for the project I’m working on. In fact, our customer insists we keep the number of locally applied patches to, for example, Linux kernel, to minimum, submitting as much as possible upstream.

However, apart from the upstreaming work which may be done for the customer, Collabora encourages us, the engineers, to spend up to two hours weekly for upstreaming on top of what customers need, and up to five days yearly as paid Community days. These community days may be spent working on the code or doing volunteering at free software events or even speaking at conferences.

Even though on this project I have already been paid for contributing to the free software project which I maintained in my free time previously (ifupdown), paid community time is a great opportunity to contribute to the projects I’m interested in, and if the projects I’m interested in coincide with the projects I’m working with, I effectively can spend even more time on them.

A bit unfortunately for me, I haven’t spent enough time last year to plan my community days, so I used most of them in the last weeks of the calendar year, and I used them (and some of my upstreaming hours) on something that benefitted both free software community and Collabora. I’m talking about SparkleShare, a cross-platform Git-based file synchronisation solution written in C#. SparkleShare provides an easy to use interface for Git, or, actually, it makes it possible to not use any Git interface at all, as it monitors the working directory using inotify and commits stuff right after it changes. It automatically handles conflicts even for binary files, even though I have to admit its handling could still be improved.

At Collabora, we use SparkleShare to store all sorts of internal documents, and it’s being used by users not familiar with command line interfaces too. Unfortunately, the version we recently had in Debian had a couple of very annoying bugs, making it a great pain to use it: it would not notice edits in local files, or not notice new commits being pushed to the server, and that led to individual users’ edits being lost sometimes. Not cool, especially when the document has to be sent to the customer in a couple of minutes.

The new versions, 1.4 (and recently released 1.5) was reported as being much better and also fixing some crashes, but it also used GTK+ 3 and some libraries not yet packaged for Debian. Thanh Tung Nguyen packaged these packages (and a newer SparkleShare) for Ubuntu and published them in his PPA, but they required some work to be fit for Debian.

I have never touched Mono packages before in my life, so I had to learn a lot. Some time was spent talking to upstream about fixing their copyright statements (they had none in the code, and only one author was mentioned in configure.ac, and nowhere else in the source), a bit more time went into adjusting and updating the patches to the current source code version. Then, of course, waiting the packages to go through NEW. Fixing parallel build issues, waiting for buildds to all build dependencies for at least one architecture… But then, finally, on 19th of January I had the updated SparkleShare in Debian.

As you may have already guessed, this blog post has been sponsored by Collabora, the first of my employers to encourage require me to work on free software in my paid time :)

February 06, 2016 05:22 PM

February 01, 2016

Philip Withnall

DX hackfest 2016 aftermath

The DX hackfest, and FOSDEM, are over. Thanks everyone for coming — and thanks to betacowork, ICAB, the GNOME Foundation, and the various companies who allowed people to come along. Thanks to Collabora for sending me along and sponsoring snacks and dinner one evening.

What did we do?

by Philip Withnall at February 01, 2016 03:42 PM

January 30, 2016

Simon McVittie

GNOME Developer Experience hackfest: xdg-app + Debian

Over the last few days I've been at the GNOME Developer Experience hackfest in Brussels, looking into xdg-app and how best to use it in Debian and Debian derivatives.

xdg-app is basically a way to run "non-core" software on Linux distributions, analogous to apps on Android and iOS. It doesn't replace distributions like Debian or packaging systems, but it adds a layer above them. It's mostly aimed towards third-party apps obtained from somewhere that isn't your distribution vendor, aiming to address a few long-standing problems in that space:

  • There's no single ABI that can be called "a standard Linux system" in the same way there would be for Windows or OS X or Android or whatever, apart from LSB which is rather limited. Testing that a third-party app "works on Linux", or even "works on stable Linux releases from 2015", involves a combinatorial explosion of different distributions, desktop environments and local configurations. Steam uses the Steam Runtime, a chroot environment closely resembling Ubuntu 12.04 LTS; other vendors tend to test on a vaguely recent Ubuntu LTS and leave it at that.

  • There's no widely-supported mechanism for installing third-party applications as an ordinary user. gog.com used to distribute Ubuntu- and Debian-compatible .deb files, but installing a .deb involves running arbitrary vendor-supplied scripts as root, which should worry anyone who wants any sort of privilege-separation. (They have now switched to executable self-extracting installers, which involve running arbitrary vendor-supplied scripts as an ordinary user... better, but not perfect.)

  • Relatedly, the third-party application itself runs with the user's full privileges: a malicious or security-buggy third-party application can do more or less anything, unless you either switch to a different uid to run third-party apps, or use a carefully-written, app-specific AppArmor profile or equivalent.

To address the first point, each application uses a specified "runtime", which is available as /usr inside its sandbox. This can be used to run application bundles with multiple, potentially incompatible sets of dependencies within the same desktop environment. A runtime can be updated within its branch - for instance, if an application uses the "GNOME 3.18" runtime (consisting of a basic Linux system, the GNOME 3.18 libraries, other related libraries like Mesa, and their recursive dependencies like libjpeg), it can expect to see minor-version updates from GNOME 3.18.x (including any security updates that might be necessary for the bundled libraries), but not a jump to GNOME 3.20.

To address the second issue, the plan is for application bundles to be available as a single file, containing metadata (such as the runtime to use), the app itself, and any dependencies that are not available in the runtime (which the app vendor is responsible for updating if necessary). However, the primary way to distribute and upgrade runtimes and applications is to package them as OSTree repositories, which provide a git-like content-addressed filesystem, with efficient updates using binary deltas. The resulting files are hard-linked into place.

To address the last point, application bundles run partially isolated from the wider system, using containerization techniques such as namespaces to prevent direct access to system resources. Resources from outside the sandbox can be accessed via "portal" services, which are responsible for access control; for example, the Documents portal (the only one, so far) displays an "Open" dialog outside the sandbox, then allows the application to access only the selected file.

xdg-app for Debian

One thing I've been doing at this hackfest is improving the existing Debian/Ubuntu packaging for xdg-app (and its dependencies ostree and libgsystem), aiming to get it into a state where I can upload it to Debian experimental. Because xdg-app aims to be a general freedesktop project, I'm currently intending to make it part of the "Utopia" packaging team alongside projects like D-Bus and polkit, but I'm open to suggestions if people want to co-maintain it elsewhere.

In the process of updating xdg-app, I sent various patches to Alex, mostly fixing build and test issues, which are in the new 0.4.8 release.

I'd appreciate co-maintainers and further testing for this stuff, particularly ostree: ostree is primarily a whole-OS deployment technology, which isn't a use-case that I've tested, and in particular ostree-grub2 probably doesn't work yet.

Source code:

Binaries (no trust path, so only use these if you have a test VM):

  • deb https://people.debian.org/~smcv/xdg-app xdg-app main

The "Hello, World" of xdg-apps

Another thing I set out to do here was to make a runtime and an app out of Debian packages. Most of the test applications in and around GNOME use the "freedesktop" or "GNOME" runtimes, which consist of a Yocto base system and lots of RPMs, are rebuilt from first principles on-demand, and are extensive and capable enough that they make it somewhat non-obvious what's in an app or a runtime.

So, here's a step-by-step route through xdg-app, first using typical GNOME instructions, but then using the simplest GUI app I could find - xvt, a small xterm clone. I'm using a Debian testing (stretch) x86_64 virtual machine for all this. xdg-app currently requires systemd-logind to put users and apps in cgroups, either with systemd as pid 1 (systemd-sysv) or systemd-shim and cgmanager; I used the default systemd-sysv. In principle it could work with plain cgmanager, but nobody has contributed that support yet.

Demonstrating an existing xdg-app

Debian's kernel is currently patched to be able to allow unprivileged users to create user namespaces, but make it runtime-configurable, because there have been various security issues in that feature, making it a security risk for a typical machine (and particularly a server). Hopefully unprivileged user namespaces will soon be secure enough that we can enable them by default, but for now, we have to do one of three things to let xdg-app use them:

  • enable unprivileged user namespaces via sysctl:

    sudo sysctl kernel.unprivileged_userns_clone=1
    
  • make xdg-app root-privileged (it will keep CAP_SYS_ADMIN and drop the rest):

    sudo dpkg-statoverride --update --add root root 04755 /usr/bin/xdg-app-helper
    
  • make xdg-app slightly less privileged:

    sudo setcap cap_sys_admin+ep /usr/bin/xdg-app-helper
    

First, we'll need a runtime. The standard xdg-app tutorial would tell you to download the "GNOME Platform" version 3.18. To do that, you'd add a remote, which is a bit like a git remote, and a bit like an apt repository:

$ wget http://sdk.gnome.org/keys/gnome-sdk.gpg
$ xdg-app remote-add --user --gpg-import=gnome-sdk.gpg gnome \
    http://sdk.gnome.org/repo/

(I'm ignoring considerations like trust paths and security here, for brevity; in real life, you'd want to obtain the signing key via https and/or have a trust path to it, just like you would for a secure-apt signing key.)

You can list what's available in a remote:

$ xdg-app remote-ls --user gnome
...
org.freedesktop.Platform
...
org.freedesktop.Platform.Locale.cy
...
org.freedesktop.Sdk
...
org.gnome.Platform
...

The Platform runtimes are what we want here: they are collections of runtime libraries with which you can run an application. The Sdk runtimes add development tools, header files, etc. to be able to compile apps that will be compatible with the Platform.

For now, all we want is the GNOME 3.18 platform:

$ xdg-app install --user gnome org.gnome.Platform 3.18

Next, we can install an app that uses it, from Alex Larsson's nightly builds of a subset of GNOME. The server they're on doesn't have a great deal of bandwidth, so be nice :-)

$ wget http://209.132.179.2/keys/nightly.gpg
$ xdg-app remote-add --user --gpg-import=nightly.gpg nightly \
    http://209.132.179.2/repo/
$ xdg-app install --user nightly org.mypaint.MypaintDevel

We now have one app, and the runtime it needs:

$ xdg-app list
org.mypaint.MypaintDevel
$ xdg-app run org.mypaint.MypaintDevel
[you see a GUI window]

Digression: what's in a runtime?

Behind the scenes, xdg-app runtimes and apps are both OSTree trees. This means the ostree tool, from the package of the same name, can be used to inspect them.

$ sudo apt install ostree
$ ostree refs --repo ~/.local/share/xdg-app/repo
gnome:runtime/org.gnome.Platform/x86_64/3.18
nightly:app/org.mypaint.MypaintDevel/x86_64/master

A "ref" has roughly the same meaning as in git: something like a branch or a tag. ostree can list the directory tree that it represents:

$ ostree ls --repo ~/.local/share/xdg-app/repo \
    runtime/org.gnome.Platform/x86_64/3.18
d00755 0 0      0 /
-00644 0 0    493 /metadata
d00755 0 0      0 /files
$ ostree ls --repo ~/.local/share/xdg-app/repo \
    runtime/org.gnome.Platform/x86_64/3.18 /files
d00755 0 0      0 /files
l00777 0 0      0 /files/local -> ../var/usrlocal
l00777 0 0      0 /files/sbin -> bin
d00755 0 0      0 /files/bin
d00755 0 0      0 /files/cache
d00755 0 0      0 /files/etc
d00755 0 0      0 /files/games
d00755 0 0      0 /files/include
d00755 0 0      0 /files/lib
d00755 0 0      0 /files/lib64
d00755 0 0      0 /files/libexec
d00755 0 0      0 /files/share
d00755 0 0      0 /files/src

You can see that /files in a runtime is basically a copy of /usr. This is not coincidental: the runtime's /files gets mounted at /usr inside the xdg-app container. There is also some metadata, which is in the ini-like syntax seen in .desktop files:

$ ostree cat --repo ~/.local/share/xdg-app/repo \
    runtime/org.gnome.Platform/x86_64/3.18 /metadata
[Runtime]
name=org.gnome.Platform/x86_64/3.16
runtime=org.gnome.Platform/x86_64/3.16
sdk=org.gnome.Sdk/x86_64/3.16

[Extension org.freedesktop.Platform.GL]
version=1.2
directory=lib/GL

[Extension org.freedesktop.Platform.Timezones]
version=1.2
directory=share/zoneinfo

[Extension org.gnome.Platform.Locale]
directory=share/runtime/locale
subdirectories=true

[Environment]
GI_TYPELIB_PATH=/app/lib/girepository-1.0
GST_PLUGIN_PATH=/app/lib/gstreamer-1.0
LD_LIBRARY_PATH=/app/lib:/usr/lib/GL

Looking at an app, the situation is fairly similar:

$ ostree ls --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master
d00755 0 0      0 /
-00644 0 0    258 /metadata
d00755 0 0      0 /export
d00755 0 0      0 /files

This time, /files maps to what will become /app for the application, which was compiled with --prefix=/app:

$ ostree ls --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master /files
d00755 0 0      0 /files
-00644 0 0   4599 /files/manifest.json
d00755 0 0      0 /files/bin
d00755 0 0      0 /files/lib
d00755 0 0      0 /files/share

There is also a /export directory, which is made visible to the host system so that the contained app can appear as a "first-class citizen" in menus:

$ ostree ls --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master /export
d00755 0 0      0 /export
d00755 0 0      0 /export/share
user@debian:~$ ostree ls --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master /export/share
d00755 0 0      0 /export/share
d00755 0 0      0 /export/share/app-info
d00755 0 0      0 /export/share/applications
d00755 0 0      0 /export/share/icons
user@debian:~$ ostree ls --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master /export/share/applications
d00755 0 0      0 /export/share/applications
-00644 0 0    715 /export/share/applications/org.mypaint.MypaintDevel.desktop
$ ostree cat --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master \
    /export/share/applications/org.mypaint.MypaintDevel.desktop
[Desktop Entry]
Version=1.0
Name=(Nightly) MyPaint
TryExec=mypaint
Exec=mypaint %f
Comment=Painting program for digital artists
...
Comment[zh_HK]=藝術家的电脑绘画
GenericName=Raster Graphics Editor
GenericName[fr]=Éditeur d'Image Matricielle
MimeType=image/openraster;image/png;image/jpeg;
Type=Application
Icon=org.mypaint.MypaintDevel
StartupNotify=true
Categories=Graphics;GTK;2DGraphics;RasterGraphics;
Terminal=false

Again, there's some metadata:

$ ostree cat --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master /metadata
[Application]
name=org.mypaint.MypaintDevel
runtime=org.gnome.Platform/x86_64/3.18
sdk=org.gnome.Sdk/x86_64/3.18
command=mypaint

[Context]
shared=ipc;
sockets=x11;pulseaudio;
filesystems=host;

[Extension org.mypaint.MypaintDevel.Debug]
directory=lib/debug

Building a runtime, probably the wrong way

The way in which the reference/demo runtimes and containers are generated is... involved. As far as I can tell, there's a base OS built using Yocto, and the actual GNOME bits come from RPMs. However, we don't need to go that far to get a working runtime.

In preparing this runtime I'm probably completely ignoring some best-practices and tools - but it works, so it's good enough.

First we'll need a repository:

$ sudo install -d -o$(id -nu) /srv/xdg-apps
$ ostree init --repo /srv/xdg-apps

I'm just keeping this local for this demonstration, but you could rsync it to a web server's exported directory or something - a lot like a git repository, it's just a collection of files. We want everything in /usr because that's what xdg-app expects, hence usrmerge:

$ sudo mount -t tmpfs -o mode=0755 tmpfs /mnt
$ sudo debootstrap --arch=amd64 --include=libx11-6,usrmerge \
    --variant=minbase stretch /mnt http://192.168.122.1:3142/debian
$ sudo mkdir /mnt/runtime
$ sudo mv /mnt/usr /mnt/runtime/files

This obviously has a lot of stuff in it that we don't need - most obviously init, apt and dpkg - but it's Good Enough™.

We will also need some metadata. This is sufficient:

$ sudo sh -c 'cat > /mnt/runtime/metadata'
[Runtime]
name=org.debian.Debootstrap/x86_64/8.20160130
runtime=org.debian.Debootstrap/x86_64/8.20160130

That's a runtime. We can commit it to ostree, and generate xdg-app metadata:

$ ostree commit --repo /srv/xdg-apps \
    --branch runtime/org.debian.Debootstrap/x86_64/8.20160130 \
    /mnt/runtime
$ fakeroot ostree commit --repo /srv/xdg-apps \
    --branch runtime/org.debian.Debootstrap/x86_64/8.20160130
$ fakeroot xdg-app build-update-repo /srv/xdg-apps

(I'm not sure why ostree and xdg-app report "Operation not permitted" when we aren't root or fakeroot - feedback welcome.)

build-update-repo would presumably also be the right place to GPG-sign your repository, if you were doing that.

We can add that as another xdg-app remote:

$ xdg-app remote-add --user --no-gpg-verify local file:///srv/xdg-apps
$ xdg-app remote-ls --user local
org.debian.Debootstrap

Building an app, probably the wrong way

The right way to build an app is to build a "SDK" runtime - similar to that platform runtime, but with development files and tools - and recompile the app and any missing libraries with ./configure --prefix=/app && make && make install. I'm not going to do that, because simplicity is nice, and I'm reasonably sure xvt doesn't actually hard-code /usr into the binary:

$ install -d xvt-app/files/bin
$ sudo apt-get --download-only install xvt
$ dpkg-deb --fsys-tarfile /var/cache/apt/archives/xvt_2.1-20.1_amd64.deb \
    | tar -xvf - ./usr/bin/xvt
./usr/
./usr/bin/
./usr/bin/xvt
...
$ mv usr xvt-app/files

Again, we'll need metadata, and it's much simpler than the more production-quality GNOME nightly builds:

$ cat > xvt-app/metadata
[Application]
name=org.debian.packages.xvt
runtime=org.debian.Debootstrap/x86_64/8.20160130
command=xvt

[Context]
sockets=x11;
$ fakeroot ostree commit --repo /srv/xdg-apps \
    --branch app/org.debian.packages.xvt/x86_64/2.1-20.1 xvt-app
$ fakeroot xdg-app build-update-repo /srv/xdg-apps
Updating appstream branch
No appstream data for runtime/org.debian.Debootstrap/x86_64/8.20160130
No appstream data for app/org.debian.packages.xvt/x86_64/2.1-20.1
Updating summary
$ xdg-app remote-ls --user local
org.debian.Debootstrap
org.debian.packages.xvt

The obligatory screenshot

OK, good, now we can install it:

$ xdg-app install --user local org.debian.Debootstrap 8.20160130
$ xdg-app install --user local org.debian.packages.xvt 2.1-20.1
$ xdg-app run --branch=2.1-20.1 org.debian.packages.xvt

xvt in a container

and you can play around with the shell in the xvt and see what you can and can't do in the container.

I'm sure there were better ways to do most of this, but I think there's value in having such a simplistic demo to go alongside the various GNOMEish apps.


Acknowledgements:

Thanks to all those!

January 30, 2016 06:07 PM

Neil McGovern

On ZFS in Debian

I’m currently over at FOSDEM, and have been asked by a couple of people about the state of ZFS and Debian. So, I thought I’d give a quick post to explain Debian’s current plan, which has come together after a lot of discussion with the FTP Masters and others about what we should do.

TLDR: It’s going in contrib, as a source-only DKMS module.

Longer version:

Debian has always prided itself on providing the unequivocally correct solution to our users and downstream distributions. This also includes licenses – we make sure that Debian contains 100% free software. This means that if you install Debian, you are guaranteed the freedoms offered under the DFSG and our social contract.

Now, this is where ZFS on Linux gets tricky. ZFS is licensed under the CDDL, and the Linux kernel under the GPLv2-only. The project’s view is that both of these are free software licenses, but that they are incompatible with each other. This incompatibility means there is a risk in producing a combined work from Linux and a CDDL module. (Note: there are arguments about whether a kernel module, once loaded, forms a combined work with the kernel. I’m not touching that with a barge pole, as I Am Not A Lawyer.)

Now, does this mean that Debian would get sued for distributing ZFS natively compiled into the kernel? Well, maybe, but I think it’s a bit unlikely. That doesn’t mean it’s the right choice for Debian to take as a project, though! It brings us back to our promise to our users, and to our commercial and non-commercial downstream distributions. If a commercial downstream distribution took the next stable release and used our binaries, they may well get sued if they have enough money to make it worthwhile. Additionally, Debian has always taken its commitment to upstream licenses very seriously. If there’s a doubt, it doesn’t go in official Debian.

It should be noted that ZFS is important to a lot of Debian users, who want to be able to use it in a way that makes it easier to install. Thus, the position we’ve arrived at is that we can ship ZFS as a source-only DKMS module. This means it will be built on the target machines, and we’re not distributing binaries. There’s also a warning in the README.Debian file explaining that care should be taken if you do things with the resultant binary – as we can’t promise it complies with the licenses.
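Concretely, once the packages are accepted, I’d expect usage on an end-user machine to look roughly like this (the package name is an assumption, as nothing has been uploaded yet):

$ sudo apt-get install zfs-dkms   # assumed package name; ships source, not a prebuilt module
$ dkms status                     # DKMS builds the module locally, against the running kernel
$ sudo modprobe zfs               # load the locally built module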

Finally, I should point out that this isn’t my decision in the end. The contents of the archive are a decision for the FTP Masters, as that responsibility is delegated to them. However, what I have been able to do is coordinate many conflicting views, and I hope that ZFS will be accepted into the archive soon!

by Neil McGovern at January 30, 2016 04:35 PM

January 29, 2016

Philip Withnall

Instrumenting the GLib main loop with Dunfell

tl;dr: Visualise your main context and sources using Dunfell. Feedback and ideas welcome.

At the DX hackfest, I’ve been working on a new tool for instrumenting and visualising the behaviour of the GLib main context (or main contexts) in your program.

Screenshot: the Dunfell viewer

It’s called Dunfell (because I’m a sucker for hills) and at a high level it works by using SystemTap to record various GMainContext interactions in your program, saving them to a log file. The log file can then be examined using a viewer program.
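In shell terms, the workflow is roughly the following (the command names here are illustrative guesses rather than Dunfell’s documented interface - check the README for the real commands):

$ sudo dunfell-record ./my-glib-program   # hypothetical: SystemTap capture, written to a log file
$ dunfell-viewer my-glib-program.log      # hypothetical: examine the log in the viewer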

The source is available on GitLab or GitHub because I still haven’t decided which is better.

In the screenshot above, each vertical line is a thread, each blue box is one dispatch phase of the main context which is currently running on that thread, each orange blob is a new GSource being created, and the green blob is a GSource which has been selected for closer inspection.

At the moment, it requires a couple of GLib patches to add some more SystemTap probe points, and it also requires a recent version of GTK+. It needs SystemTap, and I’ve only tested it on Fedora, so it might need some patching to work with the SystemTap installed on other distributions.

Screenshot: a second trace viewed in Dunfell

This screenshot is of a trace of the buffered-input-stream test from GIO, showing I/O callbacks being made across threads as idle source callbacks.

More visualisation ideas are welcome! At the moment, what Dunfell draws is quite simplistic. I hope it will be able to solve various common debugging problems eventually but suggestions for ways to do this intuitively, or for other problems to visualise, are welcome. Here are the use cases I was initially thinking about (from the README):

  • Detect GSources which are never added to a GMainContext.
  • Detect GSources which are dispatched too often (i.e. every main context iteration).
  • Detect GSources whose dispatch function takes too long (and hence blocks the main context).
  • Detect GSources which are never removed from their GMainContext after being dispatched (but which are never dispatched again).
  • Detect GMainContexts which have GSources attached or (especially) events pending, but which aren’t being iterated.
  • Monitor the load on each GMainContext, such as how many GSources it has attached, and how many events are processed each iteration.
  • Monitor ongoing asynchronous calls and GTasks, giving insight into their nesting and dependencies.
  • Monitor unfinished or stalled asynchronous calls.
  • Allow users to record logs to send to the developers for debugging on a different machine. The users may have to install additional software to record these logs (some component of Dunfell, plus its dependencies), but should not have to recompile or otherwise modify the program being debugged.
  • Work with programs which purely use GLib, through to programs which use GLib, GIO and GTK+.
  • Allow visualisation of this data, both in a standalone program, and in an IDE such as GNOME Builder.
  • Allow visualising differences between two traces.
  • Minimise runtime overhead of logging a program, to reduce the risk of disturbing race conditions by enabling logging.
  • Connecting to an already-running program is not a requirement, since by the time you’ve decided there’s a problem with a program, it’s already in the wrong state.

by Philip Withnall at January 29, 2016 11:14 AM