Planet Collabora

October 15, 2018

Alexandros Frantzis

On the low adoption of automated testing in FOSS

A few times in the recent past I've been in the unfortunate position of using a prominent Free and Open Source Software (FOSS) program or library, and running into issues of such a fundamental nature that I wondered how they even made it into a release.

In all cases, the answer came quickly when I realized that, invariably, the project involved either didn't have a test suite, or, if it did have one, it was not adequately comprehensive.

I am using the term comprehensive in a very practical, non-extreme way. I understand that it's often not feasible to test every possible scenario and interaction, but, at the very least, a decent test suite should ensure that under typical circumstances the code delivers all the functionality it promises to.

For projects of any value and significance, having such a comprehensive automated test suite is nowadays considered a standard software engineering practice. Why, then, don't we see more prominent FOSS projects employing this practice, or, when they do, why is it often employed poorly?

In this post I will highlight some of the reasons that I believe play a role in the low adoption of proper automated testing in FOSS projects, and argue why these reasons may be misguided. I will focus on topics that are especially relevant from a FOSS perspective, omitting considerations which, although important, are not particular to FOSS.

My hope is that by shedding some light on this topic, more FOSS projects will consider employing an automated test suite.

As you can imagine, I am a strong proponent of automated testing, but this doesn't mean I consider it a silver bullet. I do believe, however, that it is an indispensable tool in the software engineering toolbox, which should only be forsaken after careful consideration.

1. Underestimating the cost of bugs

Most FOSS projects, at least those not supported by some commercial entity, don't come with any warranty; it's even stated in the various licenses! The lack of any formal obligations makes it relatively inexpensive, both in terms of time and money, to have the occasional bug in the codebase. This means that there are fewer incentives for the developer to spend extra resources to try to safeguard against bugs. When bugs come up, the developers can decide at their own leisure if and when to fix them and when to release the fixed version. Easy!

At first sight, this may seem like a reasonably pragmatic attitude to have. After all, if fixing bugs is so cheap, is it worth spending extra resources trying to prevent them?

Unfortunately, bugs are only cheap for the developer, not for the users who may depend on the project for important tasks. Users expect the code to work properly and can get frustrated or disappointed if this is not the case, regardless of whether there is any formal warranty. This is even more pronounced when security concerns are involved, for which the cost to users can be devastating.

Of course, lack of formal obligations doesn't mean that there is no driver for quality in FOSS projects. On the contrary, there is an exceptionally strong driver: professional pride. In FOSS projects the developers are in the spotlight and no (decent) developer wants to be associated with a low-quality, bug-infested codebase. It's just that, due to the mentality stated above, in many FOSS projects the trade-offs developers make seem to favor a reactive rather than a proactive attitude.

2. Overtrusting code reviews

One of the development practices FOSS projects employ ardently is code review. Code reviews happen naturally in FOSS projects, even in small ones, since most contributors don't have commit access to the code repository and the original author has to approve any contributions. In larger projects there are often more structured procedures which involve sending patches to a mailing list or to a dedicated reviewing platform. Unfortunately, in some projects the trust in code reviews is so great that other practices, like automated testing, are forsaken.

There is no question that code reviews are one of the best ways to maintain and improve the quality of a codebase. They can help ensure that code is designed properly, is aligned with the overall architecture, and furthers the long-term goals of the project. They also help catch bugs, but only some of them, some of the time!

The main problem with code reviews is that we, the reviewers, are only human. We humans are great at creative thought, but we are also great at overlooking things, occasionally filling in the gaps with our own unicorns-and-rainbows inspired reality. Another problem is that we tend to focus more on code changes at a local level, and less on how they affect the system as a whole. This is not an inherent problem with the process itself, but rather a limitation of the humans performing the process. When a codebase gets large enough, it's difficult for our brains to keep all the possible states and code paths in mind and check them mentally, even in a codebase that is properly designed.

In theory, the problem of human limitations is offset by the open nature of the code. We even have the so called Linus's law which states that "given enough eyeballs, all bugs are shallow". Note the clever use of the indeterminate term "enough". How many are enough? How about the qualitative aspects of the "eyeballs"?

The reality is that most contributions to big, successful FOSS projects are reviewed on average by a couple of people. Some projects are better, most are worse, but in no case does being FOSS magically lead to a large number of reviewers tirelessly checking code contributions. This limit in the number of reviewers also limits the extent to which code reviews can stand as the only process to ensure quality.

3. It's not in the culture

In order to try out a development process in a project, developers first need to learn about it and be convinced that it will be beneficial. Although there are many resources, like books and articles, arguing in favor of automated tests, the main driver for trying new processes is still learning about them from more experienced developers when working on a project. In the FOSS world this also takes the form of studying what other projects, especially the high-profile ones, are doing.

Since comprehensive automated testing is not the norm in FOSS, this creates a negative network effect. Why should you bother doing automated tests if the high profile projects, which you consider to be role models, don't do it properly or at all?

Thankfully, the culture is beginning to shift, especially in projects using technologies in which automated testing is an integral part of the culture. Unfortunately, many of the system-level and middleware FOSS projects are still living in the world of non-automated testing.

4. Tests as an afterthought

Tests as an afterthought is not a situation particular to FOSS projects, but it is especially relevant to them, since the way they spring up and grow can disincentivize the early writing of tests.

Some FOSS projects start as small projects to scratch an itch, without any plans for significant growth or adoption, so the incentives to have tests at this stage are limited.

In addition, many projects, even the ones that start with more lofty adoption goals, follow a "release early, release often" mentality. This mentality has some benefits, but at the early stages also carries the risk of placing the focus exclusively on making the project as relevant to the public as possible, as quickly as possible. From such a perspective, spending the probably limited resources on tests instead of features seems like a bad use of developer time.

As the project grows and becomes more complex, however, more and more opportunities for bugs arise. At this point, some projects realize that adding a test suite would be beneficial for maintaining quality in the long term. Unfortunately, for many projects, it's already too late. The code by now has become test-unfriendly and significant effort is needed to change it.

The final effect is that many projects remain without an automated test suite, or, in the best case, with a poor one.

5. Missing CI infrastructure

Automated testing delivers the most value if it is combined with a CI service that runs the tests automatically for each commit or merge proposal. Until recently, access to such services was difficult to get at a reasonably low effort and cost. Developers either had to set up and host CI themselves, or pay for a commercial service, thus requiring resources which unsponsored FOSS projects were unlikely to be able to afford.

Nowadays, it's far easier to find and use free CI services, with most major code hosting platforms supporting them. Hopefully, with time, this reason will completely cease being a factor in the lack of automated testing adoption.
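
As a rough illustration, and purely as a sketch since the exact job definition depends on the CI service and the project's build system, a per-commit CI job for a hypothetical autotools-based project often boils down to a short shell script like the following:

# Hypothetical per-commit CI job for an autotools-based project;
# adjust the commands for your build system of choice.
set -e           # abort the job on the first failing command

./autogen.sh     # generate the configure script
./configure
make             # build the project
make check       # run the automated test suite; a failure here
                 # marks the commit or merge proposal as broken

The CI service runs a script like this for every commit or merge proposal and reports the result back, so regressions are caught long before they can make it into a release.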

6. Not the hacker way

The FOSS movement originated from the hacker culture and still has strong ties to it. In the minds of some, the processes around software testing are too enterprise-y, too 9-to-5, perceived as completely contrary to the creative and playful nature of hacking.

My argument against this line of thought is that the hacker values technical excellence very highly, and automated testing, as a tool that helps achieve such excellence, cannot be inconsistent with the hacker way.

Some pseudo-hackers may also argue that their skills are so refined that their code doesn't require testing. When we are talking about a codebase of any significant size, I consider this attitude to be a sign of inexperience and immaturity rather than a testament to superior skill.

Epilogue

I hope this post will serve as a good starting point for a discussion about the reasons which discourage FOSS projects from adopting a comprehensive automated test suite. Identifying both valid concerns and misconceptions is the first step in convincing both fledgling and mature FOSS projects to embrace automated testing, which will hopefully lead to an improvement in the overall quality of FOSS.

by Alexandros Frantzis at October 15, 2018 06:00 PM

September 28, 2018

Lucas Kanashiro

MicroDebConf Brasília 2018

After I came back to my home city (Brasília), I felt the need to promote Debian and help people contribute to it. Some old friends from my former university (University of Brasília) and the local community (Debian Brasília) came up with the idea of running a Debian-related event, and I just thought: “That sounds amazing!”. We contacted the university to book a small auditorium there for an entire day. After that we started to think: how should we name the event? Debian Day had been more or less one month earlier; someone suggested a MiniDebConf, but I thought ours was going to be much smaller than regular MiniDebConfs. So we decided to use a term we had already used some time ago here in Brasília: we called it MicroDebConf :)

MicroDebConf Brasília 2018 took place at the Gama campus of the University of Brasília on September 8th. It was amazing; we gathered a lot of students from the university and some high schools, and some free software enthusiasts too. We had 44 attendees in total; we did not expect all these people at the beginning! During the day we presented what the Debian Project is and the many different ways to contribute to it.

Since our focus was newcomers, we started from the beginning, explaining how to use Debian properly, how to interact with the community and how to contribute. We also introduced them to some other subjects, such as management of PGP keys, network setup with Debian and some topics about Linux kernel contributions. As you probably know, students are never satisfied: sometimes the talks are too easy and basic, other times too hard and complex to follow. So we decided to balance the level of the talks; we started from Debian basics and went all the way to details of the Linux kernel implementation. Their feedback was positive, so I think that we should do it again; attracting students is always a challenge.

At the end of the day we had some discussions about what we should do to grow our local community. We want more local people actually contributing to free software projects, and especially to Debian. A lot of people were interested, but some of them said that they need some guidance; the life of a newcomer is not so easy for now.

After some discussion we came up with the idea of a study group about Debian packaging. We will schedule meetings every week (or every two weeks, not decided yet), and during these meetings we will present packaging topics (good practices, tooling and anything that people need) and do some hands-on work. My intention is to document everything that we do, to make life easier for future newcomers who want to do Debian packaging. My main reference for this study group has been LKCamp; they are a more consolidated group and their focus is to help people start contributing to the Linux kernel.

In my opinion, this kind of initiative could help us bring new blood to the project and disseminate the free software ideas/culture. Another idea we have is to promote Debian and free software in general to non-technical people. We realized that we need to reach these people if we want a broader community; we do not know exactly how yet, but it is on our radar.

After all these talks and discussions we needed some time to relax, and we did that together! We went to a bar and got some beer (except those under 18 years old :) and food. Of course, our discussions about free software kept running all night long.

The following is an overview about this conference:

  • We probably coined this term and were the first to organize a MicroDebConf (we had already done one in 2015). We should promote this kind of local event more

  • I guess we inspired a lot of young people to contribute to Debian (and free software in general)

  • We defined a way to help local people start contributing to Debian with packaging. I really like this idea of a study group; meeting people in person is always the best way to create bonds

  • Now we hopefully will have a stronger Debian community in Brasília - Brazil \o/

Last but not least, I would like to thank LAPPIS (a research lab I was part of during my undergrad); they helped us with all the logistics and bureaucracy. Thanks also to Collabora for sponsoring the coffee break! Collabora, LAPPIS and us share the same goal: promote FLOSS to all these young people and make our community grow!

September 28, 2018 09:20 PM

September 19, 2018

Alexandros Frantzis

Bless Hex Editor 0.6.1

A long time ago, on a computer far, far away... well, actually, 14 years ago, on a computer that is still around somewhere in the basement, I wrote the first lines of source code for what would become the Bless hex editor.

For my initial experiments I used C++ with the gtkmm bindings, but C++ compilation times were so appallingly slow on my feeble computer that I decided to give the relatively young Mono framework a try. The development experience was much better, so I continued with Mono and Gtk#. For revision control, I started out with tla (remember that?), but eventually settled on bzr.

Development continued at a steady pace until 2009, when life's responsibilities got in the way, and left me with little time to work on the project. A few attempts were made by other people to revive Bless after that, but, unfortunately, they also seem to have stagnated. The project had been inactive for almost 8 years when the gna.org hosting site closed down in 2017 and pulled the official Bless page and bzr repository with it into the abyss.

Despite the lack of development and maintenance, Bless remained surprisingly functional through the years. I, and many others it seems, have kept using it, and, naturally, a few bugs have been uncovered during this time.

I recently found some time to bring the project back to life, although, I should warn, this does not imply any intention to resume feature development on it. My free time is still scarce, so the best I can do is try to maintain it and accept contributions. The project's new official home is at https://github.com/afrantzis/bless.

To mark the start of this new era, I have released Bless 0.6.1, containing fixes for many of the major issues I could find reports for. Enjoy!

Important Note: There seems to be a bug in some versions of Mono that manifests as a crash when selecting bytes. The backtrace looks like:

free(): invalid pointer
Stacktrace:

  at <unknown> <0xffffffff>
  at (wrapper managed-to-native) GLib.SList.g_free (intptr) <0x0005f>
  at GLib.ListBase.Empty () <0x0013c>
  at GLib.ListBase.Dispose (bool) <0x0000f>
  at GLib.ListBase.Finalize () <0x0001d>
  at (wrapper runtime-invoke) object.runtime_invoke_virtual_void__this__ (object,intptr,intptr,intptr) <0x00068>

Searching for this backtrace you can find various reports of other Mono programs also affected by this bug. At the time of writing, the mono packages in Debian and Ubuntu (4.6.2) exhibit this problem. If you are affected, the solution is to update to a newer version of Mono, e.g., from https://www.mono-project.com/download/stable/.
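
If you are unsure which version you are running, the Mono runtime can report it directly; this is just a quick sanity check before deciding whether you need to upgrade:

# Print the installed Mono runtime version; the crash described above
# has been observed with versions around 4.6.x.
mono --version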

by Alexandros Frantzis at September 19, 2018 12:00 AM

September 14, 2018

Alexandros Frantzis

git-c2b: An alternative workflow for Chromium's Gerrit

There are two main options to handle reviews in git. The first option is to treat commits as the unit of review. In this commit-based flow, authors work on a branch with multiple commits and submit them for review, either by pushing the branch or by creating a patch series for these commits. Typically, each commit is expected to be functional and to be reviewable independently.

Here is a feature branch in a commit-based flow, before and after changing D to D' with an interactive rebase (E and F are also changed by the rebase, to E' and F'):

A-B-C       [master]       A-B-C          [master] 
     \                          \                  
      D-E-F [feature]            D'-E'-F' [feature] or [feature-v2]

The second option is to treat branches as the unit of review. In this branch-based flow, authors work on multiple dependent branches and submit them for review by pushing them to the review system. The individual commits in each branch don't matter; only the final state of each branch is taken into account. Some review systems call this the "squash" mode.

Here are some dependent branches for a feature in a branch-based flow, before and after updating feature-1 by adding D', and then updating the other branches by merging (we could rebase, instead, if we don't care about retaining history):

A-B-C       [master]       A-B-C           [master]
     \                          \
      D     [feature-1]          D--D'     [feature-1]
       \                          \  \
        E   [feature-2]            E--E'   [feature-2]
         \                          \  \
          F [feature-3]              F--F' [feature-3]

Some people prefer to work this way, so they can update their submission without losing the history of each individual change (e.g., keep both D and D'). This reason is unconvincing, however, since one can easily preserve history in a commit-based flow, too, by checking out a different branch (e.g., 'feature-v2') to work on.
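
For example, using the branch names from the diagram above, keeping the original history around before rewriting it is a matter of two commands:

# Do the rework on a copy of the branch, keeping 'feature' untouched.
git checkout -b feature-v2 feature   # new branch to hold the reworked commits
git rebase -i master                 # rewrite D-E-F into D'-E'-F' on feature-v2
# 'feature' still points at the original D-E-F history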

Personally, I find branch-based flows a pain to work with. Their main fault is the distracting and annoying user experience when dealing with multiple dependent changes. Setting up and maintaining the dependent branches during updates is far from straightforward. What would normally be a simple 'git rebase -i', now turns into a fight to create and maintain separate dependent branches. There are tools that can help (git-rebase-update), but they are no match for the simplicity and efficiency of rebasing interactively in a single branch.

Chromium previously used the Rietveld review system, which uses branches as its unit of review. Recently Chromium switched to Gerrit, but, instead of sticking with Gerrit's native commit-based flow, it adapted its tools to provide a branch-based flow similar to Rietveld's. Interacting with Chromium's review system is done mainly through the git-cl tool, which has evolved over the years to support both flows. At this point, however, the commit-based flow is essentially unsupported and broken for many use cases. Here is what working on Chromium typically looks like:

# Create and work on first branch
$ git checkout -b feature -t origin/master
$ git commit -m 'Feature'
...
$ git commit -m 'Update to feature'
...
# Create and work on second (dependent) branch
$ git checkout -b feature-next -t feature
$ git commit -m 'Feature next'
...
$ git commit -m 'Update to feature next'
...
# Upload the changes for review
$ git checkout feature
$ git cl upload --dependencies

I wrote the git-c2b (commits-to-branches) tool to be able to maintain a commit-based git flow even when working with branch-based review systems, such as Chromium's Gerrit. The idea, and the tool itself, is simple but effective. It allows me to work as usual in a single branch, splitting changes into commits and amending them as I like. Just before submitting, I run git-c2b to produce separate dependent branches for each commit. If the branches already exist they are updated without losing any upstream metadata.
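
Conceptually, the core idea can be sketched in a few lines of shell. This is only an illustration of the concept, not the actual git-c2b implementation; among other things it hard-codes the upstream base and ignores the handling of existing branches and upstream metadata:

# Illustration only: create or update one branch per commit of the
# current branch, named <branch>-1, <branch>-2, ... oldest commit first.
branch=$(git rev-parse --abbrev-ref HEAD)
base=origin/master                       # assumed upstream base
n=1
for commit in $(git rev-list --reverse "$base..HEAD"); do
    git branch -f "$branch-$n" "$commit"
    n=$((n + 1))
done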

This is my current workflow with Chromium and git-c2b:

# Create patchset in branch
$ git checkout -b feature -t origin/master
$ git commit -m 'Change 1'
...
$ git commit -m 'Change 2'
...
# Use git-c2b to create branches feature-1, feature-2, ... for each commit
$ git c2b
# Upload the changes for review
$ git checkout feature-1
$ git cl upload --dependencies

To update the patches and dependent CLs:

$ git checkout feature
$ git rebase -i origin/master
...
# Use c2b to update the feature-1, feature-2, ... branches
$ git c2b
# Upload the changes for review
$ git checkout feature-1
$ git cl upload --dependencies

When changes start to get merged, I typically need to reupload only the commits that are left. For example, if the changes from the first two commits get merged, I will rebase on top of master, and the previously third commit will now be the first. You can tell git-c2b to start updating branches from a particular number using the -n flag:

# The first two changes got merged, get new master and rebase on top of it
$ git fetch
$ git checkout feature
$ git rebase -i origin/master
...
# At this point the first two commits will be gone, so tell c2b to update
# feature-3 from the first commit, feature-4 from the second and so on.
$ git c2b -n 3
# Upload the remaining changes for review
$ git checkout feature-3
$ git cl upload --dependencies

Although the main driver for implementing git-c2b was improving my Chromium workflow, there is nothing Chromium-specific about this tool. It can be used as a general solution to create dependent branches from commits in any branch. Enjoy!

by Alexandros Frantzis at September 14, 2018 12:00 AM

September 03, 2018

Andrew Shadura

GNU indent 2.2.12

As the maintainer of GNU indent, I have just released version 2.2.12 (signature), the first release GNU indent saw in eight years.

Highlights include:

  • New options:
    • -pal / --pointer-align-left and -par / --pointer-align-right
    • -fnc / --fix-nested-comment
    • -gts / --gettext-strings
    • -slc / --single-line-conditionals
    • -as / --align-with-spaces
    • -ut / --use-tabs
    • -nut / --no-tabs
    • -sar / --spaces-around-initializers
    • -ntac / --dont-tab-align-comments
  • C99 and C11 keywords and typeof are now recognised.
  • -linux preset now includes -nbs.
  • -kr preset now includes -par.
  • Lots of bug fixes

I’d like to thank all of the contributors of this release, most importantly:

  • Tim Hentenaar for all of the fixes and refactoring he’s done in his branch
  • Petr Písař, who maintains GNU indent in Red Hat and its derivatives, who’s submitted a lot of fixes and kept supporting users on the mailing list when I couldn’t
  • Santiago Vila, who maintains GNU indent in Debian
  • Daniel P. Valentine, who helped me a lot when I initially took over the maintenance of GNU indent
  • And lots of others who submitted their patches

by Andrej Shadura at September 03, 2018 11:24 AM

August 24, 2018

memcpy.io - Robert Foss

git reset upstream

robertfoss@xps9570 ~/work/libdrm $ git ru
remote: Counting objects: 234, done.
remote: Compressing objects: 100% (233/233), done.
remote: Total 234 (delta 177), reused 0 (delta 0)
Receiving objects: 100% (234/234), 53.20 KiB | 939.00 KiB/s, done.
Resolving deltas: 100% (177/177), completed with 36 local objects.
From git://anongit.freedesktop.org/mesa/drm
   cb592ac8166e..bcb9d976cd91  master     -> upstream/master
 * [new tag]                   libdrm-2.4.93 -> libdrm-2.4.93
 * [new tag]                   libdrm-2.4.94 -> libdrm-2.4.94

The idea here is that, by issuing a single short command, we can fetch the latest master branch from the upstream repository of the codebase we're working on and set our local master branch to point to the most recent upstream/master.

This works by looking for a remote called upstream (or falling back to origin if it isn't found), and resetting the local master branch to point at the upstream/master branch.

~/.gitconfig

Add this snippet under the [alias] section of your ~/.gitconfig file.

[alias]
    ru = "!f() { \
        REMOTES=$(git remote); \
        REMOTE=\"origin\"; \
        case \"$REMOTES\" in \
            *upstream*) \
                REMOTE=\"upstream\"; \
                ;; \
        esac; \
        git fetch $REMOTE; \
        git update-ref refs/heads/master refs/remotes/$REMOTE/master; \
        git checkout master >/dev/null 2>&1; \
        git reset --hard $REMOTE/master >/dev/null 2>&1; \
        git checkout - >/dev/null 2>&1; \
    }; f"

If you have a closer look, you'll notice that the upstream remote is used if it has been added; otherwise the origin remote is used. This selection is done by having git run a small shell function.

Example

This is what git ru might look like when used.

robertfoss@xps9570 ~/work/libdrm $ git remote -v
origin  git@gitlab.collabora.com:robertfoss/libdrm.git (fetch)
origin  git@gitlab.collabora.com:robertfoss/libdrm.git (push)
upstream    git://anongit.freedesktop.org/mesa/drm (fetch)
upstream    git://anongit.freedesktop.org/mesa/drm (push)

robertfoss@xps9570 ~/work/libdrm $ git log --pretty=log --abbrev-commit
cb592ac8166e - (HEAD -> master, upstream/master, tag: libdrm-2.4.92) bump version for release (4 months ago) <Rob Clark>
c5a656818492 - freedreno: add fd_pipe refcounting (4 months ago) <Rob Clark>
1ac3ecde2f2c - intel: add support for ICL 11 (4 months ago) <Paulo Zanoni>
bc9c789073c8 - amdgpu: Deinitialize vamgr_high{,_32} (4 months ago) <Michel Dänzer>
[snip]

robertfoss@xps9570 ~/work/libdrm $ git ru
remote: Counting objects: 234, done.
remote: Compressing objects: 100% (233/233), done.
remote: Total 234 (delta 177), reused 0 (delta 0)
Receiving objects: 100% (234/234), 53.20 KiB | 939.00 KiB/s, done.
Resolving deltas: 100% (177/177), completed with 36 local objects.
From git://anongit.freedesktop.org/mesa/drm
   cb592ac8166e..bcb9d976cd91  master     -> upstream/master
 * [new tag]                   libdrm-2.4.93 -> libdrm-2.4.93
 * [new tag]                   libdrm-2.4.94 -> libdrm-2.4.94

robertfoss@xps9570 ~/work/libdrm $ git log --pretty=log --abbrev-commit
bcb9d976cd91 - (HEAD -> master, upstream/master) xf86drm: fallback to normal path when realpath fails (4 hours ago) <Emil Velikov>
8389c5454804 - (tag: libdrm-2.4.94) Bump to version 2.4.94 (19 hours ago) <Kristian H. Kristensen>
f0c642e8df41 - libdrm: add msm drm uapi header (25 hours ago) <Tanmay Shah>
[snip]

Thanks

This post has been a part of work undertaken by my employer Collabora.

And thanks @widawsky for pointing out some improvements.

by Robert Foss at August 24, 2018 10:32 AM

August 22, 2018

Andrew Shadura

Linux Vacation Eastern Europe 2018

On Friday, I will be attending LVEE (Linux Vacation Eastern Europe) once again after a few years of missing it for various reasons. I will be presenting a talk on my experience of working with LAVA; the talk is based on a talk given by my colleague Guillaume Tucker, who helped me a lot when I was ramping up on LAVA.

Since the conference is not well known outside, well, a part of Eastern Europe, I decided I needed to write a bit about it. According to the organisers, they had the idea of having a Linux conference after the newly reborn Minsk Linux User Group organised quite a successful celebration of the tenth anniversary of Debian, and they wanted to have an even bigger event. The first LVEE took place in 2005 in the middle of a forest near Hrodna.

LVEE 2005 group photo

As the name suggests, this conference is quite different from many other conferences, and it is actually a bit close in spirit to the Linux Bier Wanderung. The conference is very informal, it happens basically in the middle of nowhere (until 2010, the Internet connection was very slow and unreliable, or absent), and there's a massive social programme every evening with beer, shashlyk and a lot of chatting.

My first LVEE was in 2009, and it was, in fact, my first Linux conference. The venue for LVEE has traditionally been a tourist camp in a forest. For those unfamiliar with the concept, a tourist camp (at least in the post-Soviet countries) is an accommodation facility usually providing a bare minimum of comfort; people normally stay in huts or small houses with shared facilities, often located outside.

Houses part of the tourist camp Another house part of the tourist camp

When the weather permits (which usually is defined as: not raining), talks are usually held outside. When it starts raining, they move inside one of the houses which is big enough to accommodate most of the people interested in talks.

Grigory Zlobin in front of a house talks about FOSS in education in Ukraine

Some participants prefer to stay in tents:

Tents of some of the participants

People not interested in talks organise impromptu open-air hacklabs:

Impromptu open-air hacklab

Or take a swim in a lake:

Person standing on a pier by a lake

Of course, each conference day is followed by shashlyks and beer:

Shashlyks work in progress

And, on the final day of the conference, cake!

LVEE cake

This year, for the first time LVEE is being sponsored by Collabora and Red Hat.

The talks are usually in Russian (with slides usually being in English), but even if you don't speak Russian and want to attend, fear not: most of the participants speak English to some degree, so you are unlikely to feel isolated. If enough English-speaking participants sign up, it is possible that we can organise some aids (e.g. translated subtitles) to make both people not speaking English and people not speaking Russian feel at home.

I hope to see some of the readers at LVEE next time :)

by Andrej Shadura at August 22, 2018 11:24 AM

August 07, 2018

Lucas Kanashiro

DebCamp and DebConf 18 summary

It should come as no surprise that DebCamp and DebConf 18 were amazing! I worked on many things that I had not had enough time to accomplish before; I also had the opportunity to meet old friends and new people. Finally, I engaged in important discussions regarding the Debian project.

Following up on my last blog post, here is what I did in Hsinchu during these days.

  • The DebConf 19 website has an initial version running \o/ I want to thank Valessio Brito and Arthur Del Esposte for helping me build this first version, and also thank tumblingweed for the explanation of how wafer works.

  • The Perl Team Rolling Sprint was really nice! Four people participated, and we were able to get a bunch of things done; you can see the full report here.

  • Arthur Del Esposte (my GSoC intern) made some improvements to his work, and also collected some feedback from other developers. I hope he will blog about these things soon. You can find his presentation about his GSoC project here; he is the first student in the video :)

  • I worked on some Ruby packages. I uploaded some new dependencies of Rails 5 to unstable (which Praveen et al. were already working on). I hope we can make the Rails 5 package available in experimental soon, and ship it in the next Debian stable release. I also discussed the Redmine package with Duck (Redmine’s co-maintainer) but did not manage to work on it.

Besides the technical part, this was my first time in Asia! I loved the architecture (despite the tight streets), the night markets, the temples and so on. Some pictures that I took are below:

And in order to provide a great experience for the Debian community next year in Curitiba - Brazil, we already started to prepare the ground for you :)

See you all next year in Curitiba!

August 07, 2018 07:54 PM

July 31, 2018

Erik Faye-Lund

Working at Collabora

In the Fuse Open post, I mentioned that I would no longer be working at Fuse. I didn’t mention what I was going to do next, and now that it’s been a while I guess it’s time to let the cat out of the bag: I’ve started working at Collabora.

I’ve been working here for 1.5 months now, and I’m really enjoying it so far! I get to work on things I really enjoy, and I get a lot of freedom! :smile: :tada:

What is Collabora

Collabora is an Open Source consultancy, specializing in a few industries. Most of what Collabora does is centered around things like Linux, automotive, embedded systems, and multimedia. You can read more about Collabora here.

The word “consultant” brings out quite a lot of stereotypes in my mind. Luckily, we’re not that kind of consultants. I haven’t worn a tie a single day at work yet!

When I got approached by Collabora, I was immediately attracted by the prospect of working more or less exclusively on Open Source Software. Collabora has “Open First” as their motto, and this fits my ideology very well! And trust me, Collabora really means it! :grinning:

What will I be doing?

I’m hired as a Principal Engineer on the Graphics team. This obviously means I’ll be working on graphics technology.

So far, I’ve been working a bit on some R&D tasks about Vulkan, but mostly on Virgil 3D (“VirGL”). If you don’t know what VirGL is, the very short explanation is that it’s GPU virtualization for virtual machines. I’ve been working on adding/fixing support for OpenGL 4.3 as well as OpenGL ES 3.1. The work isn’t complete but it’s getting close, and patches are being upstreamed as I write this.

I’m also working on Mesa. Currently mostly through Virgil, probably through other projects in the future as well. Apart from that, things depend heavily on customer needs.

Working Remotely

A big change from my previous jobs is that I now work from home instead of a shared office with my coworkers. This is because Collabora doesn’t have an Oslo office, as it’s largely a distributed team.

I’ve been doing this for around 1.5 months already, and it works a lot better than I feared. In fact, this was one of my biggest worries with taking this job, but so far it hasn’t been a problem at all! :tada:

But who knows, maybe all work and no play will make Jack a dull boy in the end? :knife:

Jokes aside, if this turns out to be a problem in the long term, I’ll look into getting a desk at some co-working space. There’s tons of them nearby.

Working as a Contractor

Another effect of Collabora not having an Oslo office means that I have to formally work as a contractor. This is mostly a formality (Collabora seems to treat people the same regardless if they are normal employees or contractors), but there’s quite a lot of legal challenges on my end due to this.

I would definitely have preferred normal employment, but I guess I don’t get to choose all the details ;)

Closing

So, this is what I’m doing now. I’m happy with my choice and I have a lot of really great colleagues! I also get to work with a huge community, and as part of that I’ll be going to more conferences going forward (next up: XDC)!

by Erik Faye-Lund at July 31, 2018 10:01 AM

memcpy.io - Robert Foss

kms_swrast: A hardware-backed graphics driver

Stack overview

Let's start by having a look at a high-level overview of what the graphics stack looks like.

[Diagram: high-level overview of the graphics stack]

Before digging too much further into this, lets cover some terminology.

DRM - Direct Rendering Manager - is the Linux kernel graphics subsystem, which contains all of the graphics drivers and does all of the interfacing with hardware.
The DRM subsystem implements the KMS - kernel mode setting - API.

Mode setting is essentially configuring output settings, like resolution, for the displays that are being used. Doing it in the kernel means that userspace doesn't need direct access to configure these things.

[Diagram]

The DRM subsystem talks to the hardware, and Mesa is used by applications through the APIs it implements: OpenGL, OpenGL ES, Vulkan, etc. All of Mesa is built on top of DRM and libdrm.

libdrm is a userspace library that wraps the DRM subsystem in order to simplify talking to drivers and avoiding common bugs in every user of DRM.

[Diagram]

Looking inside Mesa we find the Gallium driver framework. It is what most of the Mesa drivers are built using, with the Intel i965 driver being the major exception.

kms_swrast is built using Gallium, with the intention of re-using as much of the infrastructure provided by Gallium and KMS as possible, instead of reimplementing it.

kms_swrast itself relies on a backend, like softpipe or the faster llvmpipe, which actually implements the 3D primitives and functionality needed in order to reach OpenGL and OpenGL ES compliance.

Softpipe is the older and less complicated of the two implementations, whereas llvmpipe is newer and relies on LLVM as an external dependency. As a result, llvmpipe supports JIT compilation, for example, which makes it a lot faster.

Why is this a good idea?

Re-using the Gallium framework gives you a lot of things for free, and the driver can remain relatively lightweight.

Apart from the features that Gallium provides today, you'll also get free access to new features in the future, without having to write them yourself.
And since Gallium is shared between many drivers, it will be better tested and have fewer bugs than any one driver.

kms_swrast is built using DRM and actual kernel drivers, but no rendering hardware is actually used, which may seem a bit odd.

So why are the kernel drivers used for a software renderer? The answer is two-fold.

Firstly, it is what Gallium expects; secondly, there is a kernel driver called VGEM (Virtual GEM) which was created specifically for this use case. In order to avoid making invasive changes or switching to VGEM right away, simply providing access to some existing driver is the simplest possible solution. Since the actual hardware is mostly unused, it doesn't really matter which hardware you use.

The DRM driver is actually only used for a single thing: allocating a slice of memory which pixels can be rendered to and which can then be sent to the display.

Thanks

This post has been a part of work undertaken by my employer Collabora.

by Robert Foss at July 31, 2018 07:14 AM

July 29, 2018

Daniel Stone

Introducing freedesktop.org GitLab

This is quite a long post. The executive summary is that freedesktop.org now hosts an instance of GitLab, which is generally available and now our preferred platform for hosting going forward. We think it offers a vastly better service, and we needed to do it in order to offer the projects we host the modern workflows they have been asking for.

In parallel, we’re working on making our governance, including policies, processes and decision making, much more transparent.

Some history

Founded by Havoc Pennington in 2000, freedesktop.org is now old enough to vote. From the initial development of the cross-desktop XDG specs, to supporting critical infrastructure such as NetworkManager, and now as the home to open-source graphics development (the kernel DRM tree, Mesa, Wayland, X.Org, and more), it’s long been a good home to a lot of good work.

We don’t provide day-to-day technical direction or enforce set rules: it’s a very loose collection of projects which we each trust to do their own thing, some with nothing in common but where they’re hosted.

Unfortunately, that hosting hasn’t really grown up a lot since the turn of the millennium. Our account system was forked (and subsequently heavily hacked) from Debian’s old LDAP-based system in 2004. Everyone needing direct Git commit access to projects, or the ability to upload to web space, has to file a bug in Bugzilla, where after a trip through the project maintainer, eventually an admin will get around to pulling their SSH and GPG (!) keys and adding an account by hand.

Similarly, creating or reconfiguring a Git repository also requires manual admin intervention, where on request one of us will SSH into the Git server and do whatever is required. Beyond Git and cgit for viewing, we provide Bugzilla for issue tracking, Mailman and Patchwork for code review and discussion, and ikiwiki for tracking. For our sins, we also have an FTP server running somewhere. None of these services are really integrated with each other; separate accounts and separate sets of permissions are required.

Maintaining these disparate services is a burden on both admins and projects. Projects are frequently blocked on admins adding users and changing their SSH keys, changing Git hooks, adding people to Patchwork, manually applying more duct tape to the integration between these services, and fixing the duct tape when it breaks (which is surprisingly often). As a volunteer admin for the service, doing these kinds of things is not exactly the reason we get out of bed in the morning; it also consumes so much time treading water that we haven’t been able to enable new features and workflows for the projects we host.

Seeking better workflows

As of writing, around one third of the non-dormant projects on fd.o have at some point migrated their development elsewhere; mostly to GitHub. Sometimes this was because the other sites were a more natural home (e.g. to sibling projects), and sometimes just because they offered a better workflow (integration between issue tracking and commits, web-based code review, etc). Other projects which would have found fd.o a natural home have gone straight to hosting externally, though they may use some of our services - particularly mailing lists.

Not everyone wants to make use of these features, and not everyone will. For example, the kernel might well never move away from email for issue tracking and code review. But the evidence shows us that many others do want to, and our platform will be a non-starter for them unless we provide the services they want.

A bit over three years ago, I set up an instance of Phabricator at Collabora to replace our mix of Bugzilla, Redmine, Trac, and JIRA. It was a great fit for how we worked internally, and upstream seemed like a good fit too; though they were laser-focused on their usecases, their extremely solid data storage and processing model made it quite easy to extend, and projects like MediaWiki, Haskell, LLVM and more were beginning to switch over to use it as their tracker. I set up an instance on fd.o, and we started to use it for a couple of trial projects: some issue tracking and code review for Wayland and Weston, development of PiTiVi, and so on.

The first point we seriously discussed it more widely was at XDC 2016 in Helsinki, where Eric Anholt gave a talk about our broken infrastructure, cleverly disguised as something about test suites. It became clear that we had wide interest in and support for better infrastructure, though with some reservation about particular workflows. There was quite a bit of hallway discussion afterwards, as Eric and Adam Jackson in particular tried out Phabricator and gave some really good feedback on its usability. At that point, it was clear that some fairly major UI changes were required to make it usable for our needs, especially for drive-by contributors and new users.

Last year, GNOME went through a similar process. With Carlos and some of the other members being more familiar with GitLab, myself and Emmanuele Bassi made the case for using Phabricator, based on our experiences with it at Collabora and Endless respectively. At the time, our view was that whilst GitLab’s code review was better, the issue tracking (being much like GitHub’s) would not really scale to our needs. This was mostly based on having last evaluated GitLab during the 8.x series; whilst the discussions were going on, GitLab were making giant strides in issue tracking throughout 9.x.

With GitLab coming up to par on issue tracking, both Emmanuele and I ended up fully supporting GNOME’s decision to base their infrastructure on GitLab. The UI changes required to Phabricator were not really tractable for the resources we had, the code review was and will always be fundamentally unsuitable being based around the Subversion-like model of reviewing large branches in one go, and upstream were also beginning to move to a much more closed community model.

gitlab.freedesktop.org

By contrast, one of the things which really impressed us about GitLab was how openly they worked, and how open they were to collaboration. Early on in GNOME’s journey to GitLab, they dropped their old CLA to replace it with a DCO, and Eliran Mesika from GitLab’s partnership team came to GUADEC to listen and understand how GNOME worked and what they needed from GitLab. Unfortunately this was too early in the process for us, but Robert McQueen later introduced us, and Eliran and I started talking about how they could help freedesktop.org.

One of our bigger issues was infrastructure. Not only were our services getting long in the tooth, but so were the machines they ran on. In order to stand up a large new service, we’d need new physical machines, but a fleet of new machines was beyond the admin time we had. It also didn’t solve issues such as everyone’s favourite: half of Europe can’t route to fd.o for half an hour most mornings due to obscure network issues with our host we’ve had no success diagnosing or fixing.

GitLab Inc. listened to our predicament and suggested a solution to help us: that they would sponsor our hosting on Google Cloud Platform for an initial period to get us on our feet. This involves us running the completely open-source GitLab Community Edition on infrastructure we control ourselves, whilst freeing us from having to worry about failing and full disks or creaking networks. (As with GNOME, we politely declined the offer of a license to the pay-for GitLab Enterprise Edition; we wanted to be fully in control of our infrastructure, and on a level playing field with the rest of the open-source community.)

They have also offered us support, from helping a cloud idiot understand how to deploy and maintain services on Kubernetes, to taking the time to listen and understand our workflows and improve GitLab for our uses. Much of the fruit of this is already visible in GitLab through feedback from us and GNOME, though there is always more to come. In particular, one area we’re looking at is integration with mailing lists and placing tags in commit messages, so developers used to mail-based workflows can continue to consume the firehose through email, rather than being required to use the web UI for everything.

Last Christmas, we gave ourselves the present of standing up gitlab.freedesktop.org on GCP, and set about gradually making it usable and maintainable for our projects. Our first hosted project was Panfrost, who were running on either non-free services or non-collaborative hosted services. We wanted to help them out by getting them on to fd.o, but they didn’t want to use the services we had at the time, and we didn’t want to add new projects to those services anyway.

Over time, as we stabilised the deployment and fleshed out the feature set, we added a few smaller projects, who understood the experimental nature and gave us space to make some mistakes, have some down time, and helped us smooth out the rough edges. Some of the blocker here was migrating bugs: though we reused GNOME’s bztogl script, we needed some adjustments for our different setups, as well as various bugfixes.

Not long ago, we migrated Mesa’s repository hosting, as well as Wayland and Weston for both repository and issue tracking; these are our biggest projects to date.

What we offer to projects

With GitLab, we offer everything you would expect from gitlab.com (their hosted offering), or everything you would expect from GitHub with the usual external services such as Travis CI. This includes issue tracking integrated with repository management (close issues by pushing), merge requests with online review and merge, a comprehensive CI suite with shared runners available to all, custom sites built with whatever toolchain you like, external web hooks to integrate with other services, and a well-documented stable API which allows you to use external clients like git lab.
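
As a small taste of the API, listing a project's open issues is a single authenticated HTTP request; the project ID and token below are placeholders, and the exact endpoints are documented in the GitLab API reference:

# List open issues for a project via the GitLab v4 REST API.
curl --header "PRIVATE-TOKEN: <your-access-token>" \
     "https://gitlab.freedesktop.org/api/v4/projects/<project-id>/issues?state=opened"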

In theory, we’ve always provided most of the above services. Most of these - if you ignore the lack of integration between them - were more or less fine for projects running their own standalone infrastructure. But they didn’t scale to something like fd.o, where we have a very disparate family of projects sharing little in common, least of all common infrastructure and practices. For example, we did have a Jenkins deployment for a while, but it became very clear very early that this did not scale out to fd.o: it was impossible for us to empower projects to run their own CI without fatally compromising security.

Anyone familiar with the long wait for an admin to add an account or change an SSH key will be relieved to hear that this is no longer. Anyone can make an account on our GitLab instance using an email address and password, or with trusted external identity providers (currently Google, gitlab.com, GitHub, or Twitter) rather than having another username and password. We delegate permission management to project owners: if you want to give someone commit rights to your project, go right ahead. No need to wait for us.

We also support such incredible leading-edge security features as two-factor TOTP authentication for your account, Recaptcha to protect against spammers, and ways of deleting spam which don’t involve an admin sighing into a SQL console for half an hour, trying to not accidentally delete all the content.

Having an integrated CI system allows our projects to run test pipelines on merge requests, giving people fast feedback about any required changes without human intervention, and making sure distcheck works all the time, rather than just the week before release. We can capture and store logs, binaries and more as artifacts.

The same powerful system is also the engine for GitLab Pages: you can use static site generators like Jekyll and Hugo, or have a very spartan, hand-written site but also host auto-generated documentation. The choice is yours: running everything in (largely) isolated containers means that you can again do whatever you like with your own sites, without having to ask admins to set up some duct-taped triggers from Git repositories, then ask them to fix it when they’ve upgraded Python and everything has mysteriously stopped working.

Migration to GitLab, and legacy services

Now that we have a decent and battle-tested service to offer, we can look to what this means for our other services.

Phabricator will be decommissioned immediately; a read-only archive will be taken of public issues and code reviews and maintained as static pages forever, and a database dump will also be kept. But we do not plan to bring this service back, as all the projects using it have already migrated away from it.

Similarly, Jenkins has already been decommissioned and deactivated some time ago.

Whilst we are encouraging projects to migrate their issue tracking away from Bugzilla and helping those who do, we realise a lot of projects have built their workflows around Bugzilla. We will continue to maintain our Bugzilla installation and support existing projects with its use, though we are not offering Bugzilla to new projects anymore, and over the long term would like to see Bugzilla eventually retired.

Patchwork (already currently maintained by Intel for their KMS and Mesa work) is in the same boat, complicated by the fact that the kernel might never move away from patches carved into stone tablets.

Hopefully it goes without saying that our mailing lists are going to be long-lived, even if better issue tracking and code review does mean they’re a little less-trafficked than before.

Perhaps most importantly, we have anongit and cgit. anongit is not provided by GitLab, as they rightly prefer to serve repositories over https. Given that, for all existing projects we are maintaining anongit.fd.o as a read-only mirror of GitLab; there are far too many distributions, build scripts, and users out there with anongit URIs to discontinue the service. Over time we will encourage these downstreams to move to HTTPS to lessen the pressure, but this will continue to live for quite some time. Having cgit live alongside anongit is fairly painless, so we will keep it running whilst it isn’t a burden.

Lastly, annarchy.fd.o (aka people.fd.o) is currently offered as a general-purpose shell host. People use this to manage their Git repositories on people.fd.o and their files publicly served there. Since it is also the primary web host for most projects, both people and scripts use it to deploy files to sites. Some people use it for random personal file storage, to run various scripts and even as a personal IRC host. We are trying to transition these people away from using annarchy for this, as it is difficult for us to provide totally arbitrary resources to everyone who has at one point had an account with one of our member projects. Running a source of lots of IRC traffic is also a good way to make yourself deeply unpopular with many hosts.

Migrating your projects

Now that the GitLab instance has been iterated on and fleshed out, we are happy to offer to migrate all our projects. For each project, we will ask you to file an issue using the migration template. This gives you a checklist with all the information we need to migrate your Git repositories to GitLab, as well as your existing Bugzilla bugs.

Every user with a freedesktop.org SSH account already has an account created for them on GitLab, with access to the same groups. In order to recover access to the migrated accounts, you can request a password-reset link by entering the email address you signed up with into the ‘forgotten password’ box on the GitLab front page.

More information is available on the freedesktop GitLab wiki, and of course the admins are happy to help if you have any problems with this. The usual failure mode is that your email address has changed since you signed up: we’ve had one user who needed it changed as they were still using a Yahoo! mail address.

Governance and process

Away from technical issues, we’re also looking to inject a lot more transparency into our processes. For instance, why do we host kernel graphics development, but not new filesystems? What do we look for (both good and bad), and why is that? What is freedesktop.org even for, and who is it serving?

This has just been folk knowledge for some time; passed on by oral legend over IRC as verbal errata to out-of-date wiki pages. Just as with technical issues, this is not healthy for anyone: it’s more difficult for people to get involved and give us the help we so clearly need, it’s more difficult for our community to find out what they can expect from us and how we can help them, and it’s impossible for anyone to measure how good a job we’re actually doing.

One of the reasons we haven’t done a great job at this is because just keeping the Rube Goldberg machine of our infrastructure running exhausts basically all the time we have to deal with fd.o. The time we spend changing someone’s SSH keys by hand, or debugging a Git post-receive hook, is time we’re not spending on the care and feeding of our community.

We’ve spent the past couple of years paying down our technical debt, and the community equivalent thereof. Our infrastructure is much less error-prone than it was: we’ve gone from fighting fires to being able to prepare the new GitLab infrastructure and spend time shepherding projects through it. Now that we have a fair few projects on GitLab and they’ve been able to serve themselves, we’ve been able to take some time for community issues.

Writing down our processes is still a work in progress, but something we’ve made a little more headway on is governance. Currently fd.o’s governance is myself, Keith and Tollef discussing things and coming to some kind of conclusion. Sometimes that’s in recorded public fora, sometimes over email with searchable archives, sometimes just over IRC messages or verbally with no public record of what happened.

Given that there’s a huge overlap between our mission and that of the X.Org Foundation (which is a lot more than just X11!), one idea we’re exploring is to bring fd.o under the Foundation’s oversight, with clear responsibility, accountability, and delegated roles. The combination of the two should give our community much more insight into what we’re doing and why - as well as, crucially, the chance to influence it.

Of course, this is all conditional on fd.o speaking to our member projects, and the Foundation speaking to its individual members, and getting wide agreement. There will be a lot of tuning required - not least, the Foundation’s bylaws would need a change which needs a formal vote from the membership - but this at least seems like a promising avenue.

July 29, 2018 03:39 PM

July 19, 2018

Lucas Kanashiro

My DebCamp/DebConf 18 plans


Tomorrow I am going to another DebCamp and DebConf; this time in Hsinchu, Taiwan. Thanks to the Debian project, I received sponsorship to attend the event, and I plan to make the following contributions:

  • Bootstrap the DebConf 19 website. I volunteered to lead the DebConf 19 website work, and to do that I intend to get in touch with more experienced people from the DebConf team.

  • Participate part-time in the Perl team sprint. Although I have not been as active in the team as I used to be, I’ll try to use the opportunity to help with package updates and some bug fixing.

  • Keep working with Arthur Del Esposte on our GSoC project, which aims to improve distro-tracker to better support Debian teams’ workflows. Also, prepare him to give an excellent presentation in the GSoC session. Hope to see you there!

  • If I have enough time, I want to work on some of my packages too, especially Redmine.

If anyone is interested in what I’ll be doing these days, just reach out to me! It could be in person, via IRC (my nickname: kanashiro) or by mail (kanashiro@debian.org).

I hope to meet you soon in Hsinchu!

July 19, 2018 01:30 PM

July 13, 2018

Andrew Shadura

Upcoming git-crecord release

More than 1½ years since the first release of git-crecord, I’m preparing a big update. Not knowing exactly how many people are using it, I neglected the maintenance for some time, but last month I decided I needed to take action and fix some issues I’ve known about since the first release.

First of all, I’ve fixed a few minor issues with the setup.py-based installer that some users reported.

Second, I’ve ported a batch of updates from another crecord derivative merged into Mercurial. That also brought some updates to the bits of Mercurial code git-crecord is using.

Third, the long-awaited Python 3 support is here. I’m afraid at the moment I cannot guarantee support for patches in encodings other than the locale’s, but if that turns out to be a needed feature, I can think about implementing it.

Fourth, missing staging and unstaging functionality is being implemented, subject to the availability of free time during the holiday :)

The project is currently hosted at GitHub: https://github.com/andrewshadura/git-crecord.

P.S. In case you’d like to support me hacking on git-crecord, or any other of my free software activities, you can tip my Patreon account.

by Andrej Shadura at July 13, 2018 11:10 AM

June 15, 2018

Andrew Shadura

Working in open source: part 1

Three years ago on this day I joined Collabora to work on free software full-time. It still feels a bit like yesterday, despite so much time passing since then. In this post, I’m going to reconstruct the events of that year.

Back in 2015, I worked for Alcatel-Lucent, who had a branch in Bratislava. I can’t say I didn’t like my job — quite the contrary, I found it quite exciting: I worked with mobile technologies such as 3G and LTE, I had really knowledgeable and smart colleagues, and it was the first ‘real’ job (not counting the small business my father and I ran) where using Linux for development was not only not frowned upon, but was a mandatory part of the standard workflow, and running it on your workstation was common too, even if not official.

However, after working for Alcatel-Lucent for a year, I found I didn’t like some things about the job. We developed proprietary software for the routers and gateways the company produced, and despite the fact that we used quite a lot of open source libraries and free software tools, we very rarely contributed anything back, and when it happened at all, it usually happened unofficially and not on the company’s time. Each time I suggested we should upstream our local changes so that we wouldn’t have to maintain three different patchsets for different upstream versions ourselves, I was told I knew nothing about how the business works, that doing so would mean giving up control of the code, and that we couldn’t do that. At the same time, we had no issue incorporating permissively-licensed free software code. The more I worked at Alcatel-Lucent, the more I felt I was just accumulating useless knowledge of a proprietary product that I would never be able to reuse if and when I left the company. At some point, in a discussion at work, someone said that doing software development (including my free software work) even in my free time might constitute a conflict of interest, and the company might be unhappy about it. Add to that the fact that, despite relatively flexible hours, working from home was almost never allowed, and neither was working from other offices of the company.

These were the major reasons I quit my job at Alcatel-Lucent, and my last day was 10 April 2015. Luckily, we reached an agreement that I would still get my normal pay during the notice period despite not actually going to the office or doing any work, which allowed me to enjoy two months of working on my hobby projects without having to worry about money.

To be honest, I don’t want to make it seem like I quit my job just because it was all proprietary software and planned to live off donations or something; it wasn’t quite like that. While still working for Alcatel-Lucent, I was offered a job developing real-time software running inside the Linux kernel. I declined that offer, mostly because it was a small company with fewer than a dozen employees and I would have had to take over responsibility for a huge piece of code (which was, in fact, also proprietary), but it taught me one thing: there were jobs out there where my knowledge of Linux was of actual use, even in the city I lived in. The other thing I learnt was that there were remote Linux jobs too, but I needed to become self-employed to be able to take them, since my immigration status at the time didn’t allow me to be employed abroad.

Picture of the business license. Text in Slovak: ‘Osvedčenie o živnostenskom opravnení. Andrei Shadura’.

The business license I received within a few days of quitting my job

Feeling free as a bird, with the business registered, I spent two months hacking, relaxing, travelling to places in Slovakia and Ukraine, and thinking about how I was going to earn money when my two-month vacation ended.

A street in Trenčín; the castle can be seen above the building’s roof.

In Trenčín

The obvious idea was to consult, but that wouldn’t guarantee me a constant income. I could consult on Debian or Linux in general, or on version control systems — in 2015 I was an active member of the Kallithea project and I believed I could help companies migrate from CVS and Subversion to Mercurial and Git hosted internally on Kallithea. (I actually also got a job offer from Unity Technologies to hack on Kallithea and related tools, but I had to decline it since it would have required moving to Copenhagen, which I wasn’t ready for, despite liking the place when I visited them in May 2015.)

Another obvious idea was working for Red Hat, but knowing how slow their HR department was, I didn’t put too much hope into it. Besides, when I contacted them, they said they would need to get approval for me to work for them remotely and as a self-employed contractor, lowering my chances of getting a job there without having to relocate to Brno or elsewhere.

At some point, reading Debian Planet, I found a blog post by Simon McVittie on polkit, in which he mentioned Collabora. Soon I applied, had my interviews, and got a job offer.

To be continued later…

by Andrej Shadura at June 15, 2018 02:08 PM

May 20, 2018

Andrew Shadura

Porting inputplug to XCB

5 years ago I wrote inputplug, a tiny daemon which connects to your X server and monitors its input devices, running an external command each time a device is connected or disconnected.

I have used a custom keyboard layout and fairly non-standard settings for my pointing devices since 2012. It always annoyed me that those settings would be reset every time a device was disconnected and reconnected, for example when the laptop was woken up from suspend. I usually solved that by putting commands to reconfigure my input settings into the resume hook scripts, but that obviously didn’t cover the case of connecting external keyboards and mice. At some point those hook scripts stopped working because they would run too early, when the keyboard and mice were not there yet, so I decided to write inputplug.

Inputplug was the first program I ever wrote which used X at a low level, and I had to use Xlib to access the low-level features I needed. More specifically, inputplug uses the XInput X extension and listens to XIHierarchyChanged events. In June 2014, Vincent Bernat contributed a patch to rely on XInput2 only.
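
For context, subscribing to those hierarchy events with plain Xlib and XInput2 looks roughly like the following sketch (not inputplug’s actual code; the extension-presence check and error handling are omitted):

#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    /* Build an event mask selecting only XI_HierarchyChanged. */
    unsigned char mask_bits[XIMaskLen(XI_HierarchyChanged)] = { 0 };
    XISetMask(mask_bits, XI_HierarchyChanged);

    XIEventMask mask = {
        .deviceid = XIAllDevices,
        .mask_len = sizeof(mask_bits),
        .mask = mask_bits,
    };
    XISelectEvents(dpy, DefaultRootWindow(dpy), &mask, 1);
    XFlush(dpy);

    for (;;) {
        /* Hierarchy changes arrive as generic events; a real program
           would inspect ev.xcookie and react to added/removed devices. */
        XEvent ev;
        XNextEvent(dpy, &ev);
    }
}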

During the MiniDebCamp, I had a typical case of yak shaving despite not having any yaks around: I wanted to migrate inputplug’s packaging from Alioth to Salsa, and I had an idea to update the package itself as well. I had an idea of adding optional systemd user session integration, and the easiest way to do that would be to have inputplug register a D-Bus service. However, if I just registered the service, introspecting it would cause annoying delays since it wouldn’t respond to any of the messages the clients would send to it. Handling messages would require me to integrate polling into the event loop, and it turned out it’s not easy to do while sticking to Xlib, so I decided to try and port inputplug to XCB.

For those unfamiliar with XCB, here’s a bit of background: XCB is a library which implements the X11 protocol and operates on a slightly lower level than Xlib. Unlike Xlib, it only works with structures which map directly to the wire protocol. The functions XCB provides are really atomic: in Xlib, it is not unusual for a function to perform multiple X transactions or to juggle the elements of the structures a bit. In XCB, most of the functions are relatively thin wrappers to enable packing and unpacking of the data. Let me give you an example.

In Xlib, if you wanted to check whether the X server supports a specific extension, you would write something like this:

XQueryExtension(display, "XInputExtension", &xi_opcode, &event, &error)

Internally, XQueryExtension would send a QueryExtension request to the X server, wait for a reply, parse the reply and return the major opcode, the first event code and the first error code.

With XCB, you need to separately send the request, receive the reply and fetch the data you need from the structure you get:

const char ext[] = "XInputExtension";

xcb_query_extension_cookie_t qe_cookie;
qe_cookie = xcb_query_extension(conn, strlen(ext), ext);

xcb_query_extension_reply_t *rep;
rep = xcb_query_extension_reply(conn, qe_cookie, NULL);

At this point, rep has its present field set to true if the extension is present. The rest of the data is in the structure as well, which you have to free yourself after use.

Things get a bit more tricky with requests returning arrays, like XIQueryDevice. Since the xcb_input_xi_query_device_reply_t structure is difficult to parse manually, XCB provides an iterator, xcb_input_xi_device_info_iterator_t, which you can use to iterate over the structure: xcb_input_xi_device_info_next does the necessary parsing and moves the pointer so that each time it is run the iterator points to the next element.

Since replies in the X protocol can have variable-length elements, e.g. device names, XCB also provides wrappers to make accessing them easier, like xcb_input_xi_device_info_name.
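
Putting the iterator and the accessors together, listing device names might look roughly like this (a sketch based on my reading of the generated xcb-xinput API in <xcb/xinput.h>; generated names can differ between versions, and error handling is omitted):

#include <stdio.h>
#include <stdlib.h>
#include <xcb/xcb.h>
#include <xcb/xinput.h>

void list_devices(xcb_connection_t *conn)
{
    xcb_input_xi_query_device_cookie_t cookie =
        xcb_input_xi_query_device(conn, XCB_INPUT_DEVICE_ALL);
    xcb_input_xi_query_device_reply_t *reply =
        xcb_input_xi_query_device_reply(conn, cookie, NULL);
    if (!reply)
        return;

    xcb_input_xi_device_info_iterator_t it =
        xcb_input_xi_query_device_infos_iterator(reply);
    for (; it.rem; xcb_input_xi_device_info_next(&it)) {
        xcb_input_xi_device_info_t *info = it.data;
        /* Variable-length fields come with accessor functions
           instead of plain struct members: */
        printf("%.*s\n",
               xcb_input_xi_device_info_name_length(info),
               xcb_input_xi_device_info_name(info));
    }

    free(reply);
}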

Most of the code of XCB is generated: there is an XML description of the X protocol which is used in the build process, and the C code to parse and generate the X protocol packets is generated each time the library is built. This means, unfortunately, that the documentation is quite useless, and there aren’t many examples online, especially if you’re going to use rarely used functions like XInput hierarchy change events.

I decided to do the porting the hard way, changing Xlib calls to XCB calls one by one, but there’s an easier way: since Xlib is now actually based on XCB, you can #include <X11/Xlib-xcb.h> and use XGetXCBConnection to get an XCB connection object corresponding to the Xlib’s Display object. Doing that means there will still be a single X connection, and you will be able to mix Xlib and XCB calls.
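
A minimal sketch of that mixed setup (XSetEventQueueOwner is optional, but decides which of the two libraries reads events from the connection):

#include <X11/Xlib.h>
#include <X11/Xlib-xcb.h>
#include <xcb/xcb.h>

xcb_connection_t *get_xcb_connection(Display *dpy)
{
    /* The same underlying connection, usable through both APIs. */
    xcb_connection_t *conn = XGetXCBConnection(dpy);
    /* Let XCB own the event queue instead of Xlib. */
    XSetEventQueueOwner(dpy, XCBOwnsEventQueue);
    return conn;
}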

When porting, it is often useful to have a look at the sources of Xlib: it becomes obvious which XCB functions to use when you know what Xlib does internally (thanks to Mike Gabriel for pointing this out!).

Another thing to remember is that the constants and enums Xlib and XCB define usually have the same values (mandated by the X protocol) despite having slightly different names, so you can mix them too. For example, since inputplug passes the XInput event names to the command it runs, I decided to keep the names as Xlib defines them, and since I’m creating the corresponding strings by using a C preprocessor macro, it was easier for me to keep using XInput2.h instead of defining those strings by hand.

If you’re interested in the result of this porting effort, have a look at the code in the Mercurial repo. Unfortunately, it cannot be packaged for Debian yet since the Debian package for XCB doesn’t ship the module for XInput (see bug #733227).

P.S. Thanks again to Mike Gabriel for providing me important help — and explaining where to look for more of it ;)

by Andrej Shadura at May 20, 2018 07:50 PM

May 18, 2018

Andrew Shadura

Goodbye Octopress, hello Pelican

Hi from MiniDebConf in Hamburg!

As you may have noticed, I don’t update this blog often. One of the reasons was that until now it was incredibly difficult to write posts. The software I used, Octopress (based on Jekyll), was written in Ruby, and it required quite specific versions of its dependencies. I had the workspace deployed on one of my old laptops, but when I attempted to reproduce it on the laptop I currently use, I failed. Some dependencies could not be installed, others failed, and my Ruby skills weren’t enough to fix that mess. (I have to admit my Ruby skills improved insignificantly since the time I installed Octopress, but that wasn’t enough to help in this case.)

I’ve spent some time during this DebCamp migrating to Pelican, which is written in Python, packaged in Debian, and whose dependencies are quite straightforward to install. I had to install (and write) a few plugins to make the migration easier, and port my custom Octopress Bootstrap theme to Pelican.

I no longer include any scripts from Twitter or Facebook (I made the Tweet and Share buttons static links), and the Disqus comments are loaded only on demand, so reading this blog will respect your privacy better than before.

See you at MiniDebConf tomorrow!

by Andrej Shadura at May 18, 2018 08:05 PM

May 14, 2018

Erik Faye-Lund

Fuse Open

OK, so the announcement that I hinted at in my previous post is now public, so I can finally talk about this.

What

I’m very happy to tell you that Fuse is now fully open source software! :tada:

For those who don’t know what I’m talking about: Fuse is a mobile app development environment, developed by Fusetools. You can read more about it here.

This work represents more than 3.5 years of my life (and of course lots of other people, but you know, this is my blog :smirk:), and I’ve been pushing hard inside the company my whole time there to open source it. We already open sourced fuselibs about a year ago, and as far as I’m concerned, that was a success.

Now, there’s some sad news that comes with this as well. Fusetools will no longer be working on Fuse. That’s now up to the community. The company is switching focus to an app-as-a-service business model.

This also marks the end of my time at Fuse. I’ll wrap up my end of Fuse 1.9 and then take a few weeks of vacation, before I start my next job. What I’ll be doing next is very exciting, but deserves its own post.

Why

There are many reasons why we ended up where we are now. I’ll try to explain below what I believe are the major ones.

Please note that the opinions stated here are my personal ones, and not those of my employer. I’m writing this article to shed some light on what I believe are the reasons we’ve come to where we are; not to blame anyone, but to try to learn from our mistakes.

First and foremost, we failed to build a viable business on selling development tools. Developing a full platform, comprising compilers, standard libraries, editor plugins, debugging tools, IDEs and so on, is a big undertaking and costs a lot of money, while the market is full of free tools that do a pretty decent job. To convince someone to pay for something they can get for free somewhere else, you need to be a lot better.

Secondly, we’ve been taking a lot of time-consuming detours. We’ve started projects that, in my opinion, should never have been started. We should have kept our main focus on enabling our users to do more, not on doing more for them.

Third, I sadly think we failed to be as good as we needed to be. This one is a bit touchy, but I’ll try to explain. I’m not saying I think we did bad work, far from it. But to make up for the first point, we would have needed to be vastly better than the competition. And due to the second point, the competition largely caught up to us.

What could we have done differently?

I think the problem we set out to solve was a hard one, so the short answer is “beats me”… :confused:

However, I believe that our chances would at least have been better if we open sourced the platform from the beginning.

The reason is that we might not have had to build the platform all by ourselves. If we had, for instance, made our package manager usable for shipping 3rd-party packages, we could have leveraged the community for more of the high-level work. Our company could have focused on functionality that enabled the community. Instead we ended up doing a lot of very high-level work that didn’t end up benefiting a lot of users.

I also believe that if we had made the Uno compiler a stand-alone C# to C++ transpiler, and done all the graphics functionality and app bootstrapping as library functionality instead of compiler internals, the compiler might have been useful for other C#-oriented projects, and we could have aligned more closely with the C# community than we ended up doing. Instead, we treated Uno as an internal tool that app developers weren’t really supposed to care about.

I’m far from sure that those changes together would have been enough, though. This was a hard problem to solve, and that was exactly why I was interested in this gig to begin with.

The future

Well, I hope that the future of Fuse is bright, but different than before. Thanks to Fusetools releasing all of the code, I do believe that the project will live on, with a better future that isn’t driven by short-term wins that hurt in the long run.

I plan on continuing at least some of my work on Uno / Fuselibs, and I’m not the only one. In addition, there’s also still some commercial interest in Fuse, just not from Fusetools. Anything in this area is still not announced, but expect to hear more.

I still believe the technology is good, and hopefully now that it’s finally all out there for anyone to play with, we can get some great work done! Fuse has a unique tech stack, with a declarative UI engine that is built around preview / live refresh, and a comprehensive toolkit. The turn-around time for experimenting is still miles ahead of the competition. :sparkles:

by Erik Faye-Lund at May 14, 2018 08:52 PM

May 13, 2018

Erik Faye-Lund

Hello World!

Welcome to my new blog, where I’ll be writing about my open source development.

This is just a short announcement post, to introduce this blog.

If you don’t already know who I am, I am Erik “kusma” Faye-Lund, a Norwegian graphics programmer.

I’m currently working on the following projects, which I expect to be posting about:

  • Fuselibs: This is the core UI engine in Fuse, where I currently work.
  • Rocket: This is a tool for tweaking synchronization of audio and visuals in demoscene productions.
  • Grate: This is a reverse-engineered GPU driver for the GeForce ULP in the NVIDIA Tegra 2/3/4 SoCs (code-name: AR20 / Aurora).
  • Mesa 3D: This is what Grate is being built on top of, and the goal of Grate is ultimately to upstream a driver into Mesa.

In addition to this, there are some cool new announcements coming up very soon! So stay tuned!

by Erik Faye-Lund at May 13, 2018 09:36 AM

April 09, 2018

Lucas Kanashiro

Migrating PET features to distro-tracker

After joining the Debian Perl Team some time ago, PET has helped me a lot to find work to do in the team context, and also helped the whole team in our workflow. For those who do not know what PET is: “a collection of scripts that gather information about your (or your group’s) packages. It allows you to see in a bird’s eye view the health of hundreds of packages, instantly realizing where work is needed.”. PET became an important project since about 20 Debian teams were using it, including Perl and Ruby teams in which I am more active.

In Cape Town, during DebConf16, I had a conversation with Raphael Hertzog about the possibility of migrating PET features to distro-tracker. He is one of the distro-tracker maintainers, and we found some similarities between those tools. However, after that I did not have enough time to push it forward. Then, after the migration from Alioth to Salsa, PET became almost unusable because a lot of things were done based on Alioth. This gave me the motivation to get this migration idea off the drawing board and bring PET’s team-oriented views into distro-tracker.

In the meantime, the Debian Outreach team published a GSoC call for mentors for this year. I was a Debian GSoC student in 2014 and 2015, and it was a great opportunity for me to join the community. With that in mind and the wish to give this opportunity to others, I decided to become a mentor this year and proposed a project to implement the PET features in distro-tracker, called Improving distro-tracker to better support Debian Teams. We are at the student selection phase and I have received great proposals. I am looking forward to the start of the program and to finally having the PET features available in tracker.debian.org. And of course, to bringing new blood to the Debian Project, since this is the idea behind those outreach programs.

April 09, 2018 01:30 PM

March 10, 2018

Andrew Shadura

Say no to Slack, say yes to Matrix

Of all proprietary chatting systems, Slack has always seemed one of the worst to me. Not only is it a closed proprietary system with no sane clients, open source or not, but it is not just one walled garden, as Facebook or WhatsApp are, but a constellation of walled gardens, isolated from each other. To be able to participate in multiple Slack communities, the user has to create multiple accounts and keep multiple chat windows open all the time. Federation? Self-hosting? Owning your data? None of those are a thing in Slack. Until recently, it was possible to at least keep the logs of all conversations locally by connecting to the chat using IRC or XMPP if the gateway was enabled.

Now, with Slack shutting down gateways not only you cannot keep the logs on your computer, you also cannot use a client of your choice to connect to Slack. They also began changing the bots API which was likely the reason the Matrix-to-Slack gateway didn’t work properly at times. The issue has since resolved itself, but Slack doesn’t give any guarantees the gateway will continue working, and obviously they aren’t really interested in keeping it working.

So, following Gunnar Wolf’s advice (consider also reading this article by Megan Squire), I recommend you stop using Slack. If you prefer an isolated chat system with features Slack provides, and you can self-host, consider Mattermost or Rocket.Chat. Both seem to provide more or less the same features as Slack, but don’t lock you in, and you can choose to either use their paid cloud offering, or run it on your own server. We’ve been using Mattermost at Collabora since July last year, and while it’s not perfect, it’s not a bad piece of software.

If you would prefer a system you can federate, you may be interested to have a look at Matrix. Matrix is an open decentralised protocol and ecosystem, which architecturally looks similar to XMPP, but uses different technologies and offers a richer and more modern baseline, including VoIP, end-to-end encryption, decentralised history and content storage, easy bot integration and more. The web client for Matrix, Riot, is comparable to Slack, but unlike Slack, there are more clients you can use, including Weechat, libpurple, a bunch of Qt-based clients and, importantly, Riot for Android and iOS.

You don’t have to self-host a Matrix homeserver, since Matrix.org runs one you can use, but it’s quite easy to run one if you decide to, and you don’t even have to migrate your existing chats — you just join them from accounts on your own homeserver, and that’s it!

To help you with the decision to move from Slack to Matrix, you should know that since Matrix has a Slack gateway, you can gradually migrate your colleagues to the new infrastructure, by joining the Slack and Matrix chats together, and dropping the gateway only when everyone moves from Slack.

Repeating Gunnar, say no to predatory tactics. Say no to Embrace, Extend and Extinguish. Say no to Slack.

by Andrej Shadura at March 10, 2018 01:50 PM

February 23, 2018

Andrew Shadura

How to stop gnome-settings-daemon messing with keyboard layouts

In case you, just like me, want to have a heavily customised keyboard layout configuration, possibly with different layouts on different input devices (I recommend inputplug to make that work), you probably don’t want your desktop environment to mess with your settings or, worse, re-set them to some default from time to time. Unfortunately, that’s exactly what gnome-settings-daemon does by default in GNOME and Unity. While I could modify inputplug to detect that and undo the changes immediately, it turned out this behaviour can be disabled with an underdocumented option:

gsettings set org.gnome.settings-daemon.plugins.keyboard active false

Thanks to Sebastien Bacher for helping me with this two years ago.

by Andrej Shadura at February 23, 2018 03:23 PM

February 09, 2018

memcpy.io - Robert Foss

Virtualizing GPU Access

For the past few years a clear trend of containerization of applications and services has emerged. Having processes containerized is beneficial in a number of ways. It both improves portability and strengthens security, and if done properly the performance penalty can be low.

In order to further improve security, containers are commonly run in virtualized environments. This provides some new challenges in terms of supporting the accelerated graphics use case.

OpenGL ES implementation

Currently Collabora and Google are implementing OpenGL ES 2.0 support. OpenGL ES 2.0 is the lowest common denominator for many mobile platforms and as such is a requirement for Virgil3D to be viable on those platforms.

That is the motivation for making Virgil3D work on OpenGL ES hosts.

How does this work?

This stack is commonly referred to as Virgil3D, since all of the parts originated from a project with that name.

There are a few parts to this implementation: QEMU, virglrenderer and virtio-gpu. The way it works is by letting the guest applications speak unmodified OpenGL to Mesa. But instead of Mesa handing commands over to the hardware, they are channeled through virtio-gpu on the guest to QEMU on the host.

QEMU then receives the raw graphics stack state (Gallium state) and uses virglrenderer to interpret that raw state into OpenGL calls, which can be executed as entirely normal OpenGL on the host machine.

The host OpenGL stack does not even have to be Mesa, and could for example be the proprietary nvidia stack.

Trying it out

Environment

First of all, let's have a look at the development environment. When doing graphical development I find it quite helpful to set up a parallel graphics stack in order to not pollute or depend on the stack of the host machine more than we have to.

function concatenate_colon {
  local IFS=':'
  echo "$*"
}

function add_export_env {
  local VAR="$1"
  shift
  local VAL=$(eval echo "\$$VAR")
  if [ "$VAL" ]; then
    VAL=$(concatenate_colon "$@" "$VAL");
  else
    VAL=$(concatenate_colon "$@");
  fi
  eval "export $VAR=\"$VAL\""
}

function prefix_setup {
  local PREFIX="$1"

  add_export_env PATH "$PREFIX/bin"
  add_export_env LD_LIBRARY_PATH "$PREFIX/lib"
  add_export_env PKG_CONFIG_PATH "$PREFIX/lib/pkgconfig/" "$PREFIX/share/pkgconfig/"
  add_export_env MANPATH "$PREFIX/share/man"
  export ACLOCAL_PATH="$PREFIX/share/aclocal"
  mkdir -p "$ACLOCAL_PATH"
  export ACLOCAL="aclocal -I $ACLOCAL_PATH"
}

function projectshell {
  case "$1" in
    virgl | virglrenderer)
        export ALT_LOCAL="/opt/local/virgl"
        mkdir -p "$ALT_LOCAL"
        prefix_setup "$ALT_LOCAL"
        ;;
  esac
}

The above snippet is something that I would put in my .bashrc or .zshrc. Don't forget to run source ~/.bashrc or the equivalent after making changes.

To enter the environment I simply type projectshell virgl.

Build libepoxy

libepoxy is a library for managing OpenGL function pointers for you. And it is a dependency of virglrenderer, which we'll get to below.

git clone https://github.com/anholt/libepoxy.git
cd libepoxy
./autogen.sh --prefix=$ALT_LOCAL
make -j$(nproc --ignore=1)
make install

Build virglrenderer

Virglrenderer is the component that QEMU uses to provide accelerated rendering. It receives Gallium states from the guest kernel via its virtio-gpu interface, which are then translated into OpenGL on the host. It also translates shaders from the TGSI format used by Gallium into the GLSL format used by OpenGL.

git clone git://anongit.freedesktop.org/virglrenderer
cd virglrenderer
./autogen.sh --prefix=$ALT_LOCAL
make -j$(nproc --ignore=1)
make install

Build Mesa

# Fetch dependencies
sudo sed -i 's/\#[ ]*deb-src/deb-src/' /etc/apt/sources.list
sudo apt update
sudo apt-get build-dep mesa

# Actually build Mesa
git clone https://anongit.freedesktop.org/git/mesa/mesa.git
cd mesa
./autogen.sh \
    --prefix=$ALT_LOCAL \
    --enable-driglx-direct \
    --enable-gles1 \
    --enable-gles2 \
    --enable-glx-tls \
    --enable-texture-float \
    --with-platforms=drm,x11,wayland \
    --with-dri-drivers=i915,i965,nouveau \
    --with-gallium-drivers=nouveau,swrast,radeonsi,virgl \
    --without-vulkan-drivers
make -j$(nproc --ignore=1)
make install

Build QEMU

git clone git://git.qemu.org/qemu.git
cd qemu
./configure \
    --prefix=$ALT_LOCAL \
    --target-list=x86_64-softmmu \
    --enable-gtk \
    --with-gtkabi=3.0 \
    --enable-kvm \
    --enable-spice \
    --enable-usb-redir \
    --enable-libusb \
    --enable-opengl \
    --enable-virglrenderer
make -j$(nproc --ignore=1)
make install

Set up a VM

As a guest we're going to use Ubuntu 17.10, but just use the latest release of whatever distro you like. The kernel has to have been built with the appropriate virtio-gpu Kconfig options though.

wget http://releases.ubuntu.com/17.10/ubuntu-17.10.1-desktop-amd64.iso
qemu-img create -f qcow2 ubuntu.qcow2 35G
qemu-system-x86_64 \
    -enable-kvm -M q35 -smp 2 -m 4G \
    -hda ubuntu.qcow2 \
    -net nic,model=virtio \
    -net user,hostfwd=tcp::2222-:22 \
    -vga virtio \
    -display sdl,gl=on \
    -boot d -cdrom ubuntu-17.10.1-desktop-amd64.iso

Run VM

qemu-system-x86_64 \
    -enable-kvm -M q35 -smp 2 -m 4G \
    -hda ubuntu.qcow2 \
    -net nic,model=virtio \
    -net user,hostfwd=tcp::2222-:22 \
    -vga virtio \
    -display sdl,gl=on

Et Voila! Your guest should now have GPU acceleration!

Conclusion

Hopefully this guide will have helped you to build all of the software needed to set up your very own virglrenderer enabled graphics stack.

This post has been a part of work undertaken by my employer Collabora.

by Robert Foss at February 09, 2018 10:17 AM

December 27, 2017

Xavier Claessens

State of Meson in GLib/GStreamer

During the last couple of months I’ve been learning the Meson build system. Since my personal interests in Open Source Software are around GLib and GStreamer, and they both have Meson and Autotools build systems in parallel, I’ve set myself the personal goal of listing (and trying to fix) blocker bugs preventing switching them to Meson-only. Note that I’m neither a GLib nor a GStreamer maintainer, so it’s not my call whether or not they will drop Autotools.

GLIB

I opened bug #790954, a meta-bug depending on all the meson-related bugs I’ve found that are bad enough that we cannot drop Autotools unless we fix them first. Among those bugs, I’ve personally been working on these:

Bug 788773 meson does not install correct pc files

pkg-config files for glib/gobject/gio are currently generated from a .pc.in template, but some flags (notably -pthread) are hidden internally in Meson and cannot be written into the .pc file. To fix that bug I had to use Meson’s pkg-config generator, which has access to those internal compiler flags. For example the -pthread flag is hidden behind the dependency(‘threads’) object in a meson.build file. Unfortunately such an object cannot be passed to the pkg-config generator, so I opened a pull request making that generator smarter (a usage sketch follows the list below):

  • When generating a pc file for a library, it automatically takes all dependencies and link_with values from that library and adds them to the Libs.private and Requires.private fields.
  • Extra dependencies (such as ‘threads’) can be added explicitly if needed. One common case is explicitly adding a public dependency that the generator would have added in private otherwise.
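
For illustration, using the generator from a glib-style meson.build could look roughly like this. This is only a sketch: libglib and glib_sources are placeholder names, and the automatic propagation in the comment is exactly what the pull request adds, so the behaviour depends on the Meson version that ships it.

pkg = import('pkgconfig')

libglib = library('glib-2.0', glib_sources,
  dependencies : [dependency('threads')],
)

pkg.generate(
  libraries : libglib,
  name : 'glib-2.0',
  description : 'C Utility Library',
  version : meson.project_version(),
  # With the pull request, dependencies and link_with of libglib are
  # picked up automatically and written to Libs.private/Requires.private.
)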

Bug 786796 gtk-doc build fails with meson

Pretty easy one: gobject’s API documentation needs to pass an extra header file when compiling gtkdoc-scangobj from a non-standard location. I made a patch to add an include_directories argument to the gnome.gtkdoc() method and used it in glib; a sketch of the resulting call is shown below.
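
Something like this (the paths and the main XML file name are illustrative, not glib’s actual ones; include_directories is the new argument from the patch):

gnome = import('gnome')

gnome.gtkdoc('gobject',
  main_xml : 'gobject-docs.xml',
  src_dir : 'gobject',
  # the extra header location needed by gtkdoc-scangobj:
  include_directories : include_directories('../../gobject'),
  install : true,
)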

Bug 790837 Meson: missing many configure options

Compared to Autotools’ configure, glib’s meson build system was missing many build options. Also, existing options were not following the GNOME guidelines. tl;dr: we want foo_bar instead of enable-foo-bar, and we want to avoid automatic options as much as possible.

Now configure options are on par with Autotools, except for the missing --with-thread option, which has a patch pending on bug #784995.

GStreamer

Both static and shared library

For GStreamer, static builds are important. The number of shared libraries an Android application can link to is limited, and dlopen of plugins is forbidden on iOS (if I understood correctly). On those platforms GStreamer is built as one big shared library that statically links all its dependencies (e.g. glib).

Autotools is capable of generating both static and shared libraries while compiling C files only once. Doing so with Meson is possible but requires unnecessary extra work (sketched below). I created a pull request that adds a both_library() method to meson, and adds a global project option that turns all library() calls into building both shared and static.
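
The kind of extra work meant here is reusing the compiled objects by hand, roughly like this (a sketch; libfoo and foo_sources are placeholders):

foo_shared = shared_library('foo', foo_sources)

# Reuse the already-compiled objects so the sources are not built twice.
foo_static = static_library('foo',
  objects : foo_shared.extract_all_objects(),
)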

Static build of gio modules

This one is not directly related to Meson, but while working on static builds, I’ve noticed that GStreamer patched glib-networking to be able to build it statically. Their patch never made it upstream and it has one big downside: it needs to be built twice for static and dynamic. GStreamer itself recently fixed its plugin ABI to be able to do a single compile and produce both shared and static libraries.

The crux is that you cannot have the same symbol defined in every plugin. Currently GIO modules must all define g_io_module_load/unload/query() symbols, which would clash if you tried to statically link more than one GIO module. I wrote patches for gio and glib-networking to rename those symbols to be unique. The symbol name is derived from the shared module filename. For example when gio loads the libgiognutls.so extension it will remove the “libgio” prefix and “.so” suffix to get the “gnutls” plugin name, then look up g_io_gnutls_load/unload/query() symbols instead (falling back to the old names if not found).
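
For reference, the well-known entry points a dynamic GIO module exports today look roughly like this (a sketch loosely modelled on a TLS backend module; my_tls_backend_get_type() and the "mybackend" name are made-up placeholders):

#include <gio/gio.h>

/* Hypothetical type getter standing in for the module's real backend type. */
GType my_tls_backend_get_type (void);

void
g_io_module_load (GIOModule *module)
{
  g_io_extension_point_implement (G_TLS_BACKEND_EXTENSION_POINT_NAME,
                                  my_tls_backend_get_type (),
                                  "mybackend",
                                  0);
}

void
g_io_module_unload (GIOModule *module)
{
}

char **
g_io_module_query (void)
{
  gchar *eps[] = { G_TLS_BACKEND_EXTENSION_POINT_NAME, NULL };
  return g_strdupv (eps);
}

Since every module must export exactly these three names, statically linking two such modules into one binary produces duplicate symbols, hence the filename-derived renaming described above.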

The second difficulty is that GIO plugins use G_DEFINE_DYNAMIC_TYPE, which needs a GTypeModule to be able to create its GType. When those plugins are statically linked we don’t have any GTypeModule object. I made a patch to allow passing NULL in that case, turning a G_DEFINE_DYNAMIC_TYPE into a static GType.

Bug 733067 cerbero: support python3

Meson being python3-only and Cerbero python2-only, if we start building meson projects in cerbero it means we require installing both pythons too. It also adds problems with PYTHONPATH environment variable because it cannot differentiate between 2 and 3 (seriously why is there no PYTHONPATH3?).

I ran the 2to3 script against the whole cerbero codebase and then fixed a few remaining bugs manually. All in all it was pretty easy; the most difficult part is to actually test all build variants (linux, osx, windows, cross-android, cross-windows). It’s waiting for 1.14 to be released before merging this into master.

Bug 789316 Add Meson support in cerbero

Cerbero already had a recipe to build meson and ninja but they were broken. I made patches to fix that and also add the needed code to be able to build recipes using meson. It also makes use of meson directly to build gst-transcoder instead of the wrapper configure and makefile it ships. Later more recipes will be able to be converted to Meson (e.g. glib). Blocking on the python3 port of cerbero.

Choose between static or shared library

One use case Olivier Crête described on Meson issue #2765 is that he wants to make the smallest possible build of a GStreamer application, for IoT. That means statically linking everything into the executable (e.g. GStreamer, glib) but dynamically linking a few libraries provided by the platform (e.g. glibc, openssl).

With Autotools he was doing that using .la files: libraries built inside cerbero have a “.la” file, and libraries provided by the platform don’t. Autotools has a mode to statically link the former and dynamically link the latter; Meson doesn’t.

I think the fundamental question in meson is what to do when a dependency can be provided by both a static and a shared library. Currently meson takes the decision for you and always uses the shared library, unless you explicitly set static: true when you declare your dependency, with no project-wide switch.

In this pull request I fixed this by adding 2 global options:

  • default_link: Tells whether we prefer static or shared when both are available.
  • static_paths: List of path prefixes where it is allowed to use static
    libraries. By default it has “/”, which means all paths are allowed. It can be set to the path where cerbero built GStreamer (e.g. /home/…) and it will statically link only those, using shared libraries from /usr/lib for libraries not built within cerbero.

Conclusion

There is a long road ahead before getting the meson build system on par with Autotools for GLib and GStreamer. But bugs and missing features are relatively easy to fix. The Meson code base is easy and pleasant to hack, unlike the m4 macros I’ve never understood in the past 10 years I’ve been writing Autotools projects.

I think dropping Autotools from GLib is a key milestone. If we can achieve that, it proves that all the weird use-cases people have been relying on can be done with Meson.

I’ve been working on this on my personal time and on Collabora’s “2h/week for personal projects” policy. I’ll continue working on that goal when possible.

by xclaesse at December 27, 2017 09:26 PM

December 22, 2017

Gustavo Noronha Silva

CEF on Wayland

TL;DR: we have patches for CEF to enable its usage on Wayland and X11 through the Mus/Ozone infrastructure that is to become Chromium’s streamlined future. And also for Content Shell!

At Collabora we recently assisted a customer who wanted to upgrade their system from X11 to Wayland. The problem: they use CEF as a runtime for web applications and CEF was not Wayland-ready. They also wanted to have something which was as future-proof and as upstreamable as possible, so the Chromium team’s plans were quite relevant.

Chromium is at the same time very modular and quite monolithic. It supports several platforms and has slightly different code paths in each, while at the same time acting as a desktop shell for Chromium OS. To make it even more complex, the Chromium team is constantly rewriting bits or doing major refactorings.

That means you’ll often find several different and incompatible ways of doing something in the code base. You will usually not find clear and stable interfaces, which is where tools like CEF come in, to provide some stability to users of the framework. CEF neutralizes some of the instability, providing a more stable API.

So we started by looking at 1) where is Chromium headed and 2) what kind of integration CEF needed with Chromium’s guts to work with Wayland? We quickly found that the Chromium team is trying to streamline some of the infrastructure so that it can be better shared among the several use cases, reducing duplication and complexity.

That’s where the mus+ash (pronounced “mustache”) project comes in. It wants to make a better split of the window management and shell functionalities of Chrome OS from the browser while at the same time replacing obsolete IPC systems with Mojo. That should allow a lot more code sharing with the “Linux Desktop” version. It also meant that we needed to get CEF to talk Mus.

Chromium already has Wayland support that was built by Intel a while ago for the Ozone display platform abstraction layer. More recently, the ozone-wayland-dev branch was started by our friends at Igalia to integrate that work with mus+ash, implementing the necessary Mus and Mojo interfaces, window decorations, menus and so on. That looked like the right base to use for our CEF changes.

It took quite a bit of effort and several Collaborans participated in the effort, but we eventually managed to convince CEF to properly start the necessary processes and set them up for running with Mus and Ozone. Then we moved on to make the use cases our customer cared about stable and to port their internal runtime code.

We contributed touch support for the Wayland Ozone backend, which we are in the process of upstreaming, reported a few bugs on the Mus/Ozone integration, and did some debugging for others, which we still need to figure out better fixes for.

For instance, the way Wayland fd polling works does not integrate nicely with the Chromium run loop, since there needs to be some locking involved. If you don’t lock/unlock the display for polling, you may end up in a situation in which you’re told there is something to read, but before you actually do the read, the GL stack may do it in another thread, causing your blocking read to hang forever (or until there is something to read, like a mouse move). As a work-around, we avoided the Chromium run loop entirely for Wayland polling.
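
For the curious, the locking dance libwayland expects around polling looks roughly like this (a simplified sketch of the wl_display_prepare_read()/wl_display_read_events() API, not Chromium’s actual code):

#include <poll.h>
#include <wayland-client.h>

static void poll_and_dispatch(struct wl_display *display)
{
    /* Only one thread may read the fd at a time;
       everyone else just dispatches what is already queued. */
    while (wl_display_prepare_read(display) != 0)
        wl_display_dispatch_pending(display);
    wl_display_flush(display);

    struct pollfd pfd = { .fd = wl_display_get_fd(display), .events = POLLIN };
    if (poll(&pfd, 1, -1) > 0)
        wl_display_read_events(display);
    else
        wl_display_cancel_read(display);

    wl_display_dispatch_pending(display);
}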

More recently, we have started working on an internal project for adding Mus/Ozone support to Content Shell, which is a test shell simpler than Chromium the browser. We think it will be useful as a test bed for future work that uses Mus/Ozone and the content API but not the browser UI, since it lives inside the Chromium code base. We are looking forward to upstreaming it soon!

PS: if you want to build it and try it out, here are some instructions:

# Check out Google build tools and put them on the path
$ git clone https://chromium.googlesource.com/a/chromium/tools/depot_tools.git
$ export PATH=$PATH:`pwd`/depot_tools

# Check out chromium; note the 'src' after the git command, it is important
$ mkdir chromium; cd chromium
$ git clone -b cef-wayland https://gitlab.collabora.com/web/chromium.git src
$ gclient sync  --jobs 16 --with_branch_heads

# To use CEF, download it and look at or use the script we put in the repository
$ cd src # cef goes inside the chromium source tree
$ git clone -b cef-wayland https://gitlab.collabora.com/web/cef.git
$ sh ./cef/build.sh # NOTE: you may need to edit this script to adapt to your directory structure
$ out/Release_GN_x64/cefsimple --mus --use-views

# To build Content Shell you do not need to download CEF, just switch to the branch and build
$ cd src
$ git checkout -b content_shell_mus_support origin/content_shell_mus_support
$ gn args out/Default --args="use_ozone=true enable_mus=true use_xkbcommon=true"
$ ninja -C out/Default content_shell
$ ./out/Default/content_shell --mus --ozone-platform=wayland

by kov at December 22, 2017 11:25 AM

November 28, 2017

memcpy.io - Robert Foss

Building ChromiumOS for Qemu

So let's start off by covering how ChromiumOS relates to ChromeOS. The ChromiumOS project is essentially ChromeOS minus branding and some packages for things like the media digital restrictions management.

But on the whole, almost everything is there, and the pieces that aren't, you don't need.

ChromiumOS

Depot tools

In order to check out ChromiumOS and other large Google projects, you'll need depot tools.

git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
export PATH=$PATH:$(pwd)/depot_tools

Maybe you'd want to add the PATH export to your .bashrc.

Building ChromiumOS

mkdir chromiumos
cd chromiumos
repo init -u https://chromium.googlesource.com/chromiumos/manifest.git --repo-url https://chromium.googlesource.com/external/repo.git [-g minilayout]
repo sync -j75
cros_sdk
export BOARD=amd64-generic
./setup_board --board=${BOARD}
./build_packages --board=${BOARD}
./build_image --board=${BOARD} --boot_args "earlyprintk=serial,keep console=tty0" --noenable_rootfs_verification test
./image_to_vm.sh --board=${BOARD} --test_image

How to (not) boot ChromiumOS

ChromiumOS comes with a cros_start_vm command for booting the image, but at least on my machine it does not seem to boot properly. I have so far not been able to get any graphical output (over VNC).

cros_sdk
./bin/cros_start_vm --image_path=../build/images/${BOARD}/latest/chromiumos_qemu_image.bin --board=${BOARD}

Running Qemu ourselves

So if the intended tools don't work, we'll just have to roll up our sleeves and do it ourselves. This is how I got ChromiumOS booting.

Install build dependencies

These dependencies were available on Ubuntu 17.10; alternative packages might be needed for your distribution.

sudo apt install autoconf libaio-dev libbluetooth-dev libbrlapi-dev libbz2-dev libcap-dev libcap-ng-dev libcurl4-gnutls-dev libepoxy-dev libfdt-dev libgbm-dev libgles2-mesa-dev libglib2.0-dev libgtk-3-dev libibverbs-dev libjpeg8-dev liblzo2-dev libncurses5-dev libnuma-dev librbd-dev librdmacm-dev libsasl2-dev libsdl1.2-dev libsdl2-dev libseccomp-dev libsnappy-dev libssh2-1-dev libspice-server-dev libspice-server1 libtool libusb-1.0-0 libusb-1.0-0-dev libvde-dev libvdeplug-dev libvte-dev libxen-dev valgrind xfslibs-dev xutils-dev zlib1g-dev libusbredirhost-dev usbredirserver

Virglrenderer

Virglrenderer creates a virtual 3D GPU that allows the Qemu guest to use the graphics capabilities of the host machine.

This step is optional, but allows for hardware accelerated OpenGL support on the guest system. If you don't want to use Virgl, remove it from the Qemu configure step and the Qemu runtime flags.

git clone git://git.freedesktop.org/git/virglrenderer
cd virglrenderer
./autogen.sh
make -j7
sudo make install

Qemu

Qemu is a full system emulator that supports a multitude of machine architectures. We're going to use x86_64.

git clone git://git.qemu-project.org/qemu.git
mkdir -p qemu/build
cd qemu/build
../configure --target-list=x86_64-softmmu --enable-gtk --with-gtkabi=3.0 --enable-kvm --enable-spice --enable-usb-redir --enable-libusb --enable-virglrenderer --enable-opengl
make -j7
sudo make install

Run image

Now you can boot the image using Qemu.

Note that running Qemu with the virtio options requires that your host machine is running a Linux kernel which was built with the kconfig options CONFIG_DRM_VIRTIO, CONFIG_VIRT_DRIVERS and CONFIG_VIRTIO_XXXX.

cd chromiumos
/usr/local/bin/qemu-system-x86_64 \
    -enable-kvm \
    -m 2G \
    -smp 4 \
    -hda src/build/images/amd64-generic/latest/chromiumos_qemu_image.bin \
    -vga virtio \
    -net nic,model=virtio \
    -net user,hostfwd=tcp:127.0.0.1:9222-:22 \
    -usb -usbdevice keyboard \
    -usbdevice mouse \
    -device virtio-gpu-pci,virgl \
    -display gtk,gl=on

Conclusion

Hopefully this guide has helped you build all of the software needed to boot your very own ChromiumOS.

This post has been a part of work undertaken by my employer Collabora.

by Robert Foss at November 28, 2017 10:32 AM

November 17, 2017

George Kiagiadakis

ipcpipeline: Splitting a GStreamer pipeline into multiple processes

Earlier this year I worked on a GStreamer plugin called “ipcpipeline”. This plugin provides elements that make it possible to interconnect GStreamer pipelines that run in different processes. In this blog post I am going to explain how this plugin works and why you might want to use it in your application.

Why ipcpipeline?

In GStreamer, pipelines are meant to be built and run inside a single process. Normally one wouldn’t even think about involving multiple processes for a single pipeline. You can (and should) involve multiple threads, of course, which is easily done using the queue element, in order to do parallel processing. But since you can involve multiple threads, why would you want to involve multiple processes as well?

Splitting part of a pipeline out into a different process is useful when one or more elements need to be isolated for security reasons. Imagine the case where you have an application that uses a hardware video decoder and therefore has device access privileges. Also imagine that in the same pipeline you have elements that download and parse video content directly from a network server, like most Video On Demand applications would do. Although I don't mean to say that GStreamer is not secure, it can be a good idea to think ahead and make it as hard as possible for an attacker to take advantage of potential security flaws. In theory, someone could exploit a bug in the container parser by sending it crafted data from a fake server and then take control of other things by exploiting those device access privileges, or cause a system crash. ipcpipeline could help to prevent that.

How does it work?

In the (oversimplified) diagram below we can see what the media pipeline of a video player would look like with GStreamer:

[Diagram: a video player's media pipeline running in a single GStreamer process]

With ipcpipeline, this pipeline can be split into two processes, like this:

[Diagram: the same pipeline split into two processes, connected by ipcpipelinesink and ipcpipelinesrc]

As you can see, the split mainly involves 2 elements: ipcpipelinesink, which serves as the sink for the first pipeline, and ipcpipelinesrc, which serves as the source for the second pipeline. These two elements internally talk to each other through a unix pipe or socket, transferring buffers, events, queries and messages over this socket, thus linking the two pipelines together.

This mechanism doesn’t look very special, though. You might be wondering at this point, what is the difference between using ipcpipeline and some other existing mechanism like a pair of fdsink/fdsrc or udpsink/udpsrc or RTP? What is special about these elements is that the two pipelines behave as if they were a single pipeline, with the elements of the second one being part of a GstBin in the first one:

[Diagram: the two pipelines seen as one, with ipcpipelinesink acting as a GstBin that contains the whole remote pipeline]

The diagram above illustrates how you can think of a pipeline that uses the ipcpipeline mechanism. As you can see, ipcpipelinesink behaves as a GstBin that contains the whole remote pipeline. This practically means that whenever you change the state of ipcpipelinesink, the remote pipeline’s state changes as well. It also means that all messages, events and queries that make sense are forwarded from one pipeline to the other, trying to implement as closely as possible the behavior that a GstBin would have.

This design practically allows you to modify an existing application to use this split-pipeline mechanism without having to change the pipeline control logic or implement your own IPC for controlling the second pipeline. It is all integrated in the mechanism already.

ipcpipeline follows a master-slave design. The pipeline that controls the state changes of the other pipeline is called the “master”, while the other one is called the “slave”. In the above example, the pipeline that contains the ipcpipelinesink element is the “master”, while the other one is the “slave”. At the moment of writing, the opposite setup is not implemented, so it’s always the downstream part of the pipeline that can be slaved and ipcpipelinesink is always the “master”.
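To make the wiring a bit more concrete, here is a minimal sketch of how the two halves could be set up over a socketpair(). It is not the upstream example code: the fdin/fdout property names and the use of a single bidirectional socket are assumptions based on the plugin's documentation, videotestsrc and fakesink stand in for real sources and sinks, and error handling is omitted.

/* A minimal, illustrative sketch: the parent runs the "master" pipeline
 * ending in ipcpipelinesink, the forked child runs the "slave" pipeline
 * starting with ipcpipelinesrc, and the two talk over a socketpair().
 * fdin/fdout property names and the single socket are assumptions. */
#include <gst/gst.h>
#include <sys/socket.h>
#include <unistd.h>

int
main (int argc, char **argv)
{
  int fds[2];

  gst_init (&argc, &argv);
  socketpair (AF_UNIX, SOCK_STREAM, 0, fds);

  if (fork () == 0) {
    /* Child: the slave half, which would normally hold the elements that
     * need to be isolated (parsers, decoders, sinks). */
    GstElement *slave = gst_element_factory_make ("ipcslavepipeline", NULL);
    GstElement *src = gst_element_factory_make ("ipcpipelinesrc", NULL);
    GstElement *sink = gst_element_factory_make ("fakesink", NULL);

    g_object_set (src, "fdin", fds[1], "fdout", fds[1], NULL);
    gst_bin_add_many (GST_BIN (slave), src, sink, NULL);
    gst_element_link (src, sink);

    /* The slave's state changes are driven by the master over the socket,
     * so the child only needs to run a main loop. */
    g_main_loop_run (g_main_loop_new (NULL, FALSE));
    return 0;
  }

  /* Parent: the master half; ipcpipelinesink stands in for the remote
   * pipeline and forwards buffers, events and queries over the socket. */
  GstElement *master = gst_pipeline_new ("master");
  GstElement *src = gst_element_factory_make ("videotestsrc", NULL);
  GstElement *ipcsink = gst_element_factory_make ("ipcpipelinesink", NULL);

  g_object_set (ipcsink, "fdin", fds[0], "fdout", fds[0], NULL);
  gst_bin_add_many (GST_BIN (master), src, ipcsink, NULL);
  gst_element_link (src, ipcsink);

  /* Setting the master to PLAYING should also bring the slave to PLAYING. */
  gst_element_set_state (master, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}

In a real application the two halves would live in separate programs, with the socket passed across fork()/exec() or some other channel, and the slave process would contain the decoders and sinks that need to be isolated. Building it with gcc and pkg-config --cflags --libs gstreamer-1.0 should be enough, assuming gst-plugins-bad with the ipcpipeline elements is installed.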

While there can be only one “master” pipeline, it is possible to have multiple “slave” ones. This allows, for example, splitting an audio decoder and a video decoder into different processes:

[Diagram: one master pipeline feeding separate audio and video decoder slave processes]

It is also possible to have multiple ipcpipelinesink elements connect to the same slave pipeline. In this case, the slave pipeline will follow whichever of the states it gets from the two ipcpipelinesinks is closest to PLAYING. Also, messages from the slave pipeline will only be forwarded through one of the two ipcpipelinesinks, so you will not notice any duplicate messages. Behavior should be exactly the same as in the split slaves scenario.

[Diagram: two ipcpipelinesink elements connected to a single slave pipeline]

Where is the code?

ipcpipeline is part of the GStreamer bad plugins set (gst-plugins-bad). Documentation is included with the code and there are also some examples that you can try out to get familiar with it. Happy hacking!

by gkiagia at November 17, 2017 01:25 PM

November 10, 2017

Helen Koike

How to start contributing to Open Source

Collabora is participating in the Linux Developer Conference Brazil and this post was written in Portuguese to serve as a guide for the attendees to learn more about how to start contributing to Open Source Software.

Many people think that to start contributing to a FOSS (Free and Open Source Software) project you have to know how to code. That is a myth: what people really need is to understand how the project is structured as a community, and often contributing does not even require writing code. Reporting bugs, running tests, contributing to the interface design, reviewing and archiving obsolete bug reports, translating the software, or helping organize teams or conferences are all very welcome contributions.

Building the software from source

Projects usually have a web page with plenty of information, including instructions on how to download the source code. Most projects use some version control system; Git is currently the most popular, but it could also be Svn, Cvs, Mercurial or others. Go to the website of the project you are interested in, check which system is used and familiarize yourself with the necessary tools.

Every project is different, but you will probably find some of the following files at the root of the project:

  • README (txt): Contains an initial explanation of the project. Start here, since this file usually has information on how to build and install the software from source.
  • LICENSE or COPYING (txt): Has information about the license under which the project is distributed.
  • MAINTAINERS (txt): Describes which people are responsible for which part of the project.
  • CONTRIBUTING (txt): Gives more information on how to participate in the community and contribute.
  • Documents or docs (directory): Contains various documentation about the project, covering both usage and technical details.

When we refer to the "mainline" or "upstream" project, we mean the official project, where the development of new features is happening and which contains the most recent changes. In general, when you make a modification (patch) to the code, it will only be considered official after it lands in the mainline version. For example, the kernel that ships with a Linux distribution is not the mainline: although it is based on a specific version of the mainline Linux project, the distribution's community usually applies several modifications, both to the source code and to the kernel configuration, to address that community's specific needs. Therefore, when you find a bug, it is important to test the mainline code to see whether it has already been fixed or whether it only affects your distribution's version.

How to ask for help

When asking for help, keep in mind that most people in the community are volunteers just like you and are not obliged to answer your requests, so be polite and check whether the project defines some kind of code of conduct. Don't be afraid to ask, though: show that you did some quick investigation of your own. This signals that you are putting in the effort, and people generally like to encourage newcomers who show interest. For example:

"Olá, eu sou novo no projeto, queria entender sobre X, achei o artigo Y mas ele não parece explicar o que eu gostaria de saber, alguém poderia me explicar ou me indicar onde posso ver essa informação?"

"Olá, estou tentando entender como o código X funciona, me parece que faz a tarefa Y mas estou incerta, existe alguma documentação sobre isso? Procurei e não encontrei. Agradeço se alguém me ajudar"

If you are not sure you are asking on the right mailing list or in the right IRC channel, ask where it would be more appropriate to post your question. If you get no answer, don't assume you are being ignored because you are new; people are busy. Wait a little (a few hours on IRC, or a week on email) and ask again.

Where to ask for help

IRC: Many projects have an IRC chat channel where the community coordinates and helps each other. Check whether your project has a channel, download an IRC client, connect to the server and join the channel. For example, on the Freenode server you can find the channels #freebsd, #ubuntu, #debian, #python and #docker. There are also more specific channels; Debian, for instance, is organized by teams, so on the OFTC server you can find #debian-cloud, #debian-mirrors, #debian-ftp, among others. Projects often have channels specifically for newcomers, such as #kernelnewbies on OFTC.

A guide to setting up IRC: https://fedoramagazine.org/beginners-guide-irc/

Mailing lists / forums: Check whether the project has a mailing list or forum for discussion; the Linux kernel, for example, has one list per subsystem. Find the appropriate list and subscribe. Many lists provide archives of past emails, which is useful when you are looking for a topic that has already been discussed. A tip here is to bottom-post (reply below, or interleaved with, the quoted text), as most projects do. If you get no answer within a week or two, check whether you sent your question to the most appropriate mailing list; sometimes people are simply busy. I usually reply to the same email thread with the word "ping" to remind people to answer.

Discussions on other systems: some projects use GitHub directly for questions and discussions. Check whether the project uses a specific system for discussions and take part.

Pull requests

A pull request is when you ask for your changes to be included in mainline. Each project has its own way of receiving code modifications (patches). In the Linux kernel, for example, you must send patches in the body of an email in the git-format-patch format; in FreeBSD you must attach the patch in unified diff format to the bug tracking system; some other projects accept pull requests through GitHub. Check with your project how you should send patches to the community.

Community structure

Each community organizes itself in a different way, but we can usually find the following roles within a community:

  • Author: the person who started the project
  • Committer: someone who has commit access to mainline
  • Maintainer: the person responsible for reviewing and applying patches for some part of the project, or for the whole project
  • Contributors: people who help the project in various ways
  • Team: a subgroup of contributors who handle some specific task in the project, possibly even acting as a maintainer
  • Users

It is important to know the community structure so you know whom to ask questions, whom to request reviews from, and where to send contributions: to the team or group of people working on the related area. On mailing lists or IRC channels with a very generic scope it will be harder to find someone to review and apply a patch, or to answer a very specific question about some topic.

Examples of how some communities work

Debian:

The community is organized in a very democratic way: the project leader is elected in an annual vote, the work is divided among teams (e.g. the mirrors team, the DSA team for infrastructure, the release team that coordinates the next release), and each Debian package can have either a specific maintainer or a team as its owner. So when you find a bug in a particular package, check who is responsible for it, get in touch and send your patches to the right person, team or mailing list.

Linux Kernel:

The project is maintained in Git. The only committer to mainline is Linus Torvalds, and the project is divided into many subsystems; each subsystem has a maintainer whom Linus Torvalds trusts and whose pull requests he accepts. How development is organized varies a lot between subsystems and each one has its own rules: some subsystems have co-maintainers and others don't, and each subsystem normally has an IRC channel, a mailing list and documentation.

Diving deeper into the code

Most people start by contributing something as simple as fixing a typo. This simple contribution teaches you the complete workflow of working with the community, but finding technical problems to solve is not always easy and requires a deeper knowledge of the project.

As you dive deeper into the code, find out which debugging methods the project uses; these techniques will help you understand the code better and report your problem to other people in more detail. Find out where you can see the error logs and how to add a log message to the code, and see whether you can step through the project with tools like GDB or Python Trace. Some projects already include tests; also check whether the community uses any external testing tool, and learn how to run the tests and debug the code.

Finding a problem to solve

If you have found something misbehaving in the project you are interested in, start there: check whether someone has already reported the bug on a mailing list, forum or in the bug tracker itself, get in touch with the people involved and ask for more information. If you don't know where to start looking in the code, ask on the mailing lists or IRC channels; people will usually point you roughly in the right direction, and from there you can begin your investigation of the problem. Report the bug to the community so they know it is already being investigated and can contact you to work together, thus avoiding duplicated work.

If you haven't had the "luck" of running into a bug, many projects already have a list of known bugs just waiting for someone to adopt them. Find out where this list lives, pick a bug you can reproduce, and don't be afraid to ask questions.

Depending on the project, going through the steps above can be very complicated and may require a lot of prior knowledge just to understand a bug. In the Linux kernel, for example, such a list of known problems barely exists, and the ones that do exist only contain problems that are hard for a beginner. What I suggest in this case is to change your approach: instead of trying to find a problem, study the code. Once you are familiar enough with it, you will probably notice that the code is not perfect and spot several points that could be improved. One tip is to take some piece of code (a function, class, module or driver of the project) and try to rewrite it from scratch, using the original code only as a reference and asking the community about the parts you don't understand. The knowledge gained in this exercise will give you a better view of the code and the internal APIs, expose possible problems, and integrate you better into the community.

Paid, mentored internships

A great way to start contributing to FOSS is through a guided internship. Some companies and FOSS foundations fund remote internship programs of roughly 3 months, where the mentor is usually a volunteer who proposes a specific task within the project. This gives you a direction for your contributions and someone you can ask questions of, whose role it really is to answer you and to follow your progress regularly, and on top of that you get paid for it.

Google Summer of Code (GSoC): a remote internship in a FOSS project, paid by Google, running for 3 months from May to July and aimed at students. Check which projects are participating; if one interests you, look at the proposals made by the volunteer mentors and at the selection process, as there are usually some tasks you need to complete as part of your application.

Outreachy: organized by the Software Freedom Conservancy, similar to GSoC but for groups that are under-represented in the community. You don't need to be a student, and it runs twice a year (May to June, and December to February).

Endless Vacation of Code (EVoC): the X.org Foundation has its own program for university students who want to start contributing. EVoC can start in any month of the year.

Conferences

Many FOSS projects organize conferences to bring the community together and discuss current problems collaboratively. Going to conferences is a great way to become familiar with the project and meet in person the people you interact with online. Check which conferences the project you are interested in organizes, or which major conferences the people you interact with attend.

Travel funding for conferences

The problem is that most of these conferences are outside Brazil, and travel gets very expensive, especially for students. Fortunately, many of these conferences offer travel grants. The Linux Foundation, for example, provides a form to request help with airfare and hotel costs; there is also support for under-represented groups to encourage diversity in the community, and sometimes the project itself has a grant fund. Debian, for example, may pay for your trip, especially if you are already helping the community, but even newcomers can get it.

Another way to get funding is to volunteer to help organize the conference: send an email to the organizing team and ask whether that is a possibility.


I hope these tips help. If you have any questions, get in touch or leave a comment.

by Helen M. Koike Fornazier (noreply@blogger.com) at November 10, 2017 12:45 PM