Planet Collabora

September 25, 2016

Gustavo Padovan

My talk about Mainline Explicit Fencing at XDC 2016!

Last week I was at XDC in Helsinki, where I presented the Explicit Fencing work we’ve been doing on the mainline Linux kernel in the last few months. There was a livestream of all presentations during the conference and recorded sessions are available. You can check the video of my presentation. Check out the slides too.

If you want to check the code we’ve been writing, it is available here:

Linux Kernel: https://git.kernel.org/cgit/linux/kernel/git/padovan/linux.git/log/?h=fences

Mesa: https://git.collabora.com/cgit/user/padovan/mesa.git/log/?h=fences

libdrm: https://git.collabora.com/cgit/user/padovan/libdrm.git/log/?h=fences

kmscube: https://github.com/robclark/kmscube/tree/atomic-fence

Soon we will get Explicit Fencing on Android’s drm_hwcomposer as well, so expect updates on this blog with more information about that. :)

Also, I would like to take the opportunity to thank Collabora for sponsoring my travel to XDC and Martin Peres for organizing such a great conference. It was my first time attending XDC and my time there was absolutely great. I learnt a lot about what the graphics community has been doing lately and met the people doing this work. I was happy to see a lot of interest from many people in the Explicit Fencing work we’ve been doing.

 

by Gustavo Padovan at September 25, 2016 06:23 PM

September 23, 2016

memcpy.io - Robert Foss

XDC 2016

XDC 2016 was hosted in Helsinki at Haaga-Helia. The full program was filmed and is archived here.

[Slides] FastUIDraw - High Performance 2D renderer for GPUs

Kevin Rogovin gave an excellent talk about FastUIDraw, a highly optimized 2D renderer for GPUs.

By aggressively targeting GPUs only and limiting the feature set to what is required by a browser, FastUIDraw performs >9.3x faster than Cairo-CPU and >4.8x faster than the previous GPU state of the art, SKIA-GL.

Hopefully FastUIDraw can be incorporated upstream into ChromiumOS and Android.

[Slides] 2D Performance

Martin Peres gave a talk about the 2D performance and power consumption of the Xserver.

The xf86-video-intel driver showed quite mixed performance numbers, ranging from 1.51x to 32.6x the CPU performance. The FPS/Watt measurements showed 0.73x to 15.1x the CPU efficiency.

When looking at Cairo traces, the power efficiency is actually lower with GPU acceleration than with CPU rendering. This is somewhat expected: the Cairo workload is not very high throughput, so the overhead of doing 2D operations is relatively high compared to the actual work.

Overall, toolkits are moving away from letting the Xserver do 2D rendering, for reasons of portability and performance.

[Slides] libglvnd Status Update

Andy Ritger gave a talk about the current libglvnd status. The goal of libglvnd is to allow different graphics libraries, from potentially different vendors, to coexist on a filesystem and in a process. GLX, EGL, OpenGL and OpenGL ES are all supported by libglvnd.

Currently Mesa supports libglvnd for OpenGL and GLX, with EGL support in the pipeline.

[Slides] drm_hwcomposer

Sean Paul and Zach Reizner gave an exceptionally well-timed talk about the Android && Chromium drm_hwcomposer project. drm_hwcomposer is an implementation of the hwcomposer (HWC) API on top of Linux DRM/KMS.

The talk detailed the implications of HWC2 and explicit fencing. A large part of the HWC1 implementation can be removed from drm_hwcomposer since it is made redundant by the fencing support in HWC2. So, for example, the worker threads (DrmCompositorWorker and FrameWorker) are no longer necessary and can be removed.

[Slides] Status update of Nouveau

Samuel Pitoiset, Karol Herbst, Pierre Moreau and Martin Peres gave a talk about what has happened in Nouveau land in the last year.

Hardware support is, as always, taking steps forward, with Fermi support scheduled for Linux v4.9+.

A call to arms for Nouveau compiler optimizations was issued by Martin Peres. There is a lot of low-hanging fruit available for optimization, and the compiler is overall in good shape. So feel free to contact Martin or the Nouveau project if you are interested or curious.

As of the Maxwell generation of GPUs, the firmware now has to be signed. GM20x/GP100 firmware has been released, but support for loading firmware on Tegra has not been provided by NVidia.

Martin made a rather clear point that the Nouveau project needs to see some cooperation from NVidia in order to make progress with new and upcoming NVidia hardware.

Conclusion

Thanks to the X.Org Foundation and the board of directors for arranging XDC 2016. This post has been a part of work undertaken by my employer Collabora.

by Robert Foss at September 23, 2016 06:20 PM

September 22, 2016

Gustavo Noronha Silva

WebKitGTK+ 2.14 and the Web Engines Hackfest

Next week our friends at Igalia will be hosting this year’s Web Engines Hackfest. Collabora will be there! We are gold sponsors, and have three developers attending. It will also be an opportunity to celebrate Igalia’s 15th birthday \o/. Looking forward to meeting you there! =)

Carlos Garcia has recently released WebKitGTK+ 2.14, the latest stable release. This is a great release that brings a lot of improvements and works much better on Wayland, which is becoming mature enough to be used by default. In particular, it fixes the clipboard, which was one of the main missing features, thanks to Carlos Garnacho! We have also been able to contribute a bit to this release =)

One of the biggest changes this cycle is the threaded compositor, which was implemented by Igalia’s Gwang Yoon Hwang. This work improves performance by not stalling other web engine features while compositing. Earlier this year we contributed fixes to make the threaded compositor work with the web inspector and fixed elements, helping with the goal of enabling it by default for this release.

Wayland was also lacking an accelerated compositing implementation. There was a patch to add a nested Wayland compositor to the UIProcess, with the WebProcesses connecting to it as Wayland clients to share the final rendering so that it can be shown on screen. It was not ready, though, and there were questions as to whether that was the way to go; alternative proposals on how best to implement it were floating around.

At last year’s hackfest we had discussions about what the best path forward would be, where Collaborans Emanuele Aina and Daniel Stone (proxied by Emanuele) contributed quite a bit to figuring out how to implement it in a way that was both efficient and platform-agnostic.

We later picked up the old patchset, rebased on the then-current master and made it run efficiently as proof of concept for the Apertis project on an i.MX6 board. This was done using the fancy GL support that landed in GTK+ in the meantime, with some API additions and shortcuts to sidestep performance issues. The work was sponsored by Robert Bosch Car Multimedia.

Igalia managed to improve and land a very well designed patch that implements the nested compositor, though it was still not as efficient as it could be, as it was using glReadPixels to get the final rendering of the page to the GTK+ widget through cairo. I have improved that code by ensuring we do not waste memory when using HiDPI.

As part of our proof of concept investigation, we got this WebGL car visualizer running quite well on our sabrelite imx6 boards. Some of it went into the upstream patches or proposals mentioned below, but we have a bunch of potential improvements still in store that we hope to turn into upstreamable patches and advance during next week’s hackfest.

One of the improvements that already landed was an alternate code path that leverages GTK+’s recent GL super powers to render using gdk_cairo_draw_from_gl(), avoiding the expensive copying of pixels from the GPU to the CPU and making it go faster. That improvement exposed a weird bug in GTK+ that causes a black patch to appear when shrinking the window, which I have a tentative fix for.

We originally proposed adding a new gdk_cairo_draw_from_egl() to use an EGLImage instead of a GL texture or renderbuffer. In our proof of concept we noticed it is even more efficient than the texturing currently used by GTK+, and could give us even better performance for WebKitGTK+. Emmanuele Bassi thinks it might be better to add EGLImage as another code branch inside from_gl(), though, so we will look into that.

Another very interesting Igalian addition to this release is support for the MemoryPressureHandler even on systems with no cgroups set up. The memory pressure handler is a WebKit feature which flushes caches and frees unused resources when the operating system notifies it that memory is scarce.

We worked with the Raspberry Pi Foundation to add support for that feature to the Raspberry Pi browser and contributed it upstream back in 2014, when Collabora was trying to squeeze as much as possible from the hardware. We had to add a cgroups setup to wrap Epiphany in, back then, so that it would actually benefit from the feature.

With this improvement, systems without a custom cgroups setup will benefit as well: the UIProcess monitors memory usage and notifies each WebProcess when memory is tight.

Some of these improvements were achieved by developers getting together at the Web Engines Hackfest last year and laying out the ground work or ideas that ended up in the code base. I look forward to another great few days of hackfest next week! See you there o/

by kov at September 22, 2016 05:03 PM

Jeremy Whiting

GSettings vs QSettings

A few weeks ago after discussing with Luke Yelavich about what to work on in speech-dispatcher next I decided to take a stab at making it use GSettings for its settings. (You can see the work in progress here if you like.) I've used GSettings before for work projects so thought it would be a good/easy thing to take on.

There are many advantages of using GSettings over plain ini-style files (see the sketch after this list).
  • Type checking (You can't enter a string for a numeric setting for example).
  • Notification of setting changes.
  • Command-line changing of settings.
  • Default values for settings defined in the schema(s).
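
As a concrete illustration of the first two advantages, here is a minimal C sketch (the org.example.speech schema id and the rate key are invented for this example; a real schema must be compiled and installed with glib-compile-schemas before g_settings_new() will find it):

#include <gio/gio.h>

/* Called whenever the key changes, e.g. from another process via:
 *   gsettings set org.example.speech rate 250 */
static void
on_rate_changed (GSettings *settings, const gchar *key, gpointer user_data)
{
  g_print ("rate is now %d\n", g_settings_get_int (settings, key));
}

int
main (void)
{
  /* Hypothetical schema id, for illustration only. */
  GSettings *settings = g_settings_new ("org.example.speech");

  /* Type checking: this aborts at runtime if "rate" is not an
   * integer key in the schema, catching mistakes early. */
  g_print ("rate: %d\n", g_settings_get_int (settings, "rate"));

  /* Change notification: no polling or file watching required. */
  g_signal_connect (settings, "changed::rate",
                    G_CALLBACK (on_rate_changed), NULL);

  /* Run a main loop so change notifications get delivered. */
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}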

On that wip branch, speech-dispatcher itself has been changed to use GSettings and also reacts to many setting changes dynamically. It doesn't react to changes to the port type, port number or unix socket path, since we have no mechanism to tell client applications that these are changing. There are also GSettings schemas for the output modules; the next step is to make them read their settings from GSettings instead of the old ini-style .conf files. spd-conf has also been modified to write to GSettings rather than .conf files. That change alone shrank the spd-conf python script by quite a few lines of code and made it a lot easier to read.

As I was doing this work I got thinking about the differences between GSettings and QSettings. Besides one being glib/c based and the other being Qt/C++ they are really pretty similar. There are a few differences though:
  • QSettings doesn't emit signals when a setting changes. (I found a few forum posts asking why this is, with possible workarounds, but nothing built into QSettings.)
  • QSettings doesn't have a schema for the settings themselves. There's no way to introspect a settings file to see what settings are possible; it just depends on what keys the application reads.
  • QSettings doesn't have a command-line tool to set the settings. Since QSettings is cross-platform, it uses the Registry by default on Windows, PList files by default on macOS, and ini-style files on Linux.
  • QSettings does have type checking, but no range checking or anything like that.

I was a bit disappointed that QSettings, which I've used for many many years, is lacking these seemingly obvious and probably useful features. I wonder whether, if we as a community implemented these features in QSettings, the Qt Company would accept them.

by Jeremy Whiting (noreply@blogger.com) at September 22, 2016 04:09 PM

September 06, 2016

Andrew Shadura

Manual control of OpenEmbedded -dbg packages

In December last year, OpenEmbedded introduced automatic debug packages. Prior to that, you’d need to manually construct the FILES_${PN}-dbg variable in your recipe. If you need to retain manual control over precisely what goes into debug packages, set the undocumented NOAUTOPACKAGEDEBUG variable to 1, the same way the Qt recipe does:

NOAUTOPACKAGEDEBUG = "1"
FILES_${PN}-dev = "${includedir}/${QT_DIR_NAME}/Qt/*"
FILES_${PN}-dbg = "/usr/src/debug/"
FILES_${QT_BASE_NAME}-demos-doc = "${docdir}/${QT_DIR_NAME}/qch/qt.qch"

P.S. Knowing this would have saved me and my colleagues days of work.

September 06, 2016 12:28 PM

Gustavo Padovan

Mainline Explicit Fencing – part 1

When it comes to buffer sharing synchronization in the kernel there are two ways of doing it: Implicit Fencing and Explicit Fencing. The difference between them lies in whether the kernel shares synchronization information with userspace: it is either implicit, with no fencing information exposed, or explicit, with all of this information available to userspace.

The fencing synchronization mechanism allows buffers to be shared without the risk of a driver or userspace reading an incomplete buffer or writing to a buffer that is still in use somewhere else in the system. Fencing orders these operations so that reads or writes happen only when the buffer is no longer used by other drivers. For example, when a GPU job is queued, a fence is associated with the buffer in the job; other drivers can use that fence for synchronization purposes and won’t touch the buffer until a signal from the fence is received. The signal means the buffer is now free to be used. Similarly, the GPU driver can wait for a buffer to come off the screen before rendering to it again.

The central piece here is the fence, an element that is attached to each buffer whenever a request involving that buffer is sent to the kernel. The fence can be used by userspace or other drivers to wait for the work to finish. Once the work is finished the fence signals, and the waiter can proceed and do whatever they want with the buffer.

While Implicit Fencing helps a lot with buffer synchronization, there are a few cases where the whole desktop compositing could stall. Imagine the following compositor flow: there are 3 buffers to process, A, B and C. A and B are sent for rendering in parallel while C is going to be composed from both A and B. But the compositor will only be notified when both buffers are rendered, so if B takes too long, compositing of the whole desktop is blocked waiting for B, and C won’t be displayed in time.

A compositor processing two buffers in parallel: with Implicit Fencing, if B takes too long, the desktop compositor freezes.

However, with Explicit Fencing the compositor has one fence for each buffer and is notified when each buffer is rendered. So if A renders fast and B takes too long, the compositor can decide not to wait for B and proceed with the scanout of C composed from A and an old version of B. The fencing information allows the compositor to be smart and take decisions to keep the screen from freezing, for example.

As of today the Linux kernel only has generic APIs for Implicit Fencing; although some drivers already have Explicit Fencing, their APIs are device-specific. Android currently has its own implementation through the Android Sync Framework – which will be explained in the next article.

Explicit Fencing works in a Producer-Consumer fashion. In a GPU rendering + scanout pipeline it would synchronize between the kernel drivers: when submitting a new rendering job to the GPU (the Producer side), userspace gets back a fence related to the buffer submitted. That means userspace doesn’t need to block waiting for the job to complete; a signal is sent when the job is finished. As userspace doesn’t need to block, and has a fence for the buffer, it can proceed right away with the syscall asking the display hardware (the Consumer) to scan out the buffer that is yet to be processed. With explicit fencing the kernel is taught to wait for the fence to signal before starting the scanout process.

A new fence is returned to userspace when the buffer is submitted to the kernel for scanout on the display hardware; that fence will signal when the buffer is no longer being displayed and is thus ready for reuse by another rendering job. When userspace gets this fence back it can submit a new rendering job to the GPU without waiting: the wait is done on the kernel side by the GPU driver, and once the fence signals, rendering on that buffer can begin.
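
To make this concrete, here is a minimal sketch of what waiting on an explicit fence can look like from userspace (assuming fence_fd is a fence file descriptor already obtained from the kernel, for example from a job-submission ioctl):

#include <errno.h>
#include <poll.h>

/* Wait for an explicit fence to signal, with a timeout. A sync_file
 * fd becomes readable (POLLIN) once the fence it wraps has signaled. */
int wait_fence(int fence_fd, int timeout_ms)
{
    struct pollfd pfd = { .fd = fence_fd, .events = POLLIN };
    int ret = poll(&pfd, 1, timeout_ms);

    if (ret > 0)
        return 0;       /* signaled: the buffer is free to use */
    if (ret == 0)
        return -ETIME;  /* timeout: e.g. reuse an old buffer instead */
    return -errno;      /* poll() itself failed */
}

The timeout is what lets a compositor implement the policy described above: if the fence for B has not signaled by the deadline, it can scan out with the old version of B instead of freezing.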

Explicit Fencing

The fence travels all the way to userspace and on to the next element in the pipeline. The yellow arrows represent the fences in userspace.

Last but not least, debuggability of the graphics pipeline is improved. Having access to the fence in userspace helps a lot in understanding what is happening in the pipeline. Previously, with Implicit Fencing, no information was available, so it was hard to figure out what was happening in the pipeline, and each vendor was trying to implement their own Implicit Fencing mechanism. Now, with a standard Explicit Fencing mechanism, it is easier to build debug/tracing infrastructure that can be used to investigate issues in any system.

The next article will explain the Android Sync Framework; later, the work on mainline to support explicit fencing will be described.

by Gustavo Padovan at September 06, 2016 10:00 AM

August 31, 2016

Helen Koike

LinuxCon NA 2016 - Highlights

After visiting FISL this summer, my travels have now taken me to LinuxCon NA 2016 in Toronto.
 
As everyone knows, the hot topic of the moment is containers, and they were everywhere at LinuxCon. Several companies are working in this market; there is even hardware optimized for getting the best performance with containers!
However, besides containers, there were several other subjects I had contact with:
memory-driven computers, workqueues, bluetooth, graphics, file systems, power saving (check the talk highlights below).
I also met several amazing people working in different fields and contributing with the free software community.

The place:

The infrastructure of the event was great: wifi worked everywhere, and there was breakfast for attendees and snacks during the short breaks throughout the day.
In the main hall there were several couches and tables, and the conference rooms were great.


Each morning there were keynotes hosted in a big fancy room. These were also streamed to the main hall so other people could watch.
In the afternoons there were several talks happening in parallel in smaller rooms.
 
 

The women's lunch:

On the first day there was a women-only lunch event promoted by Intel, attended by 100+ women from the tech field. I've never seen so many of us reunited like that!
It was a great event to socialize and learn where everybody works. Several of them work directly with coding, but not the majority.
It was a pleasure to meet everyone and I am looking forward to seeing even more women in tech.
 
 

Booths (Highlights):

 

 HP:

In this booth I met The Machine, which is based on a memory-driven computer architecture that promises to revolutionize how we know computers today.
The main memory is based on memristors, which can be viewed as non-volatile RAM, so instead of our basic model of caches/main memory/disk we would have only one memory based on memristors, all connected through a photonic fabric instead of a copper bus.
This changes our current programming model. HP has a GitHub repository available with a framework where you can emulate the hardware, test, and start programming for it.
 
 
 

Diamanti:

Diamanti is a company that offers a hardware-based solution to optimize containers and virtual environments. As mentioned in my NVMe post, I am working on a patch to optimize the performance of a shared NVMe device for guest systems in software. Diamanti, instead of sharing an NVMe device in software, makes their hardware pretend there are multiple NVMe devices, and attaches each of these devices directly to a container or virtual machine; from a software point of view, the container controls the device without the VMM interfering.
They do the same for other peripherals besides NVMe, such as network cards.
 

Ubuntu:

Besides the Linux distribution (Ubuntu), this booth was presenting Juju, which is a tool to manage your services in the cloud, and also LXD, a hypervisor for containers.
 

Docker:

As most of you know, the Docker project is a great tool to create containers, which are something in between a virtual machine and a chroot: they use the kernel from the host.
Docker is also the name of the company (I thought it was only the name of the project) and they use LXC as a base to create containers.
The company provides services for other companies using the Docker project, such as setting up infrastructure, setting up a private Docker Hub, providing support, etc.
 

Microsoft:

Why was Microsoft at LinuxCon? To declare its love for Linux! :)
In this booth I obtained many stickers reading "Microsoft loves Linux". I guess they decided to stop fighting old battles and be friends with Linux in the server market.
 

CoreOS:

CoreOS is a Linux distribution mostly meant to be a lightweight host system for Docker containers.
Kubernetes is a tool for managing containers, automating deployment and scaling; used in conjunction with CoreOS, it is a good match.
 

Talks (Highlights):

 

Btrfs with High Speed Devices - Chris Mason, Facebook:

Chris Mason, the current maintainer of Btrfs, talked about this file system, tools to debug it, and how to identify bottlenecks.
One of the bottlenecks was btree locking; he presented a patch with a new locking scheme that optimizes the file system.
 

Open Source Bluetooth Device Firmware for IoT and Makers - Marcel Holtmann, Intel:

In this talk, Marcel Holtmann gave a great overview of the Bluetooth stack and mentioned that Bluetooth 5.0 is coming with support for mesh networking.
As the maintainer of the Bluetooth stack on Linux, he talked about BlueZ and other Bluetooth tools on Linux.
For IoT and makers, who usually use an nRF51/nRF52 Bluetooth chip with the proprietary SoftDevice firmware, Marcel talked about how we could use Zephyr or Mynewt (which are open source) instead of SoftDevice, and how he managed to get it working on an Arduino 101.
 

Async Execution with Workqueue - Bhaktipriya Shridhar, Linux Kernel

Bhaktipriya Shridhar gave a talk about her Outreachy project on workqueues and how she managed to migrate several drivers from the old API to the new one.
Workqueues are a mechanism in the Linux kernel to execute pieces of code asynchronously; in short, if you have a function to execute and you don't want to wait for it to return, you can add it to a workqueue.
Internally, the kernel has two APIs: the old one, with several issues such as proliferation of kernel threads (it could run out of process IDs before even executing userspace), deadlocks (if not handled correctly) and unnecessary context switches; and the new API, the Concurrency Managed Workqueue (cmwq), which solves most of these issues.
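
As a small sketch of the new API (a hypothetical module, purely for illustration), queueing a function for asynchronous execution looks like this:

#include <linux/module.h>
#include <linux/workqueue.h>

/* The function to run asynchronously on the shared cmwq worker pool. */
static void my_work_fn(struct work_struct *work)
{
    pr_info("running asynchronously\n");
}

static DECLARE_WORK(my_work, my_work_fn);

static int __init my_init(void)
{
    schedule_work(&my_work); /* returns immediately; work runs later */
    return 0;
}

static void __exit my_exit(void)
{
    cancel_work_sync(&my_work); /* cancel and/or wait for pending work */
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");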

 

Kernel Internship Report and Outreachy Panel - Moderated by Karen Sandler, Software Freedom Conservancy; Helen Fornazier, Rik Van Riel & Bhaktipriya Shridhar

 
Outreachy is a 3-month internship program meant to promote the presence of minorities in the free software community.
If you know what GSoC is, Outreachy is similar, with small differences in the projects (not necessarily about coding), the selection phase, who can participate, etc.

On the panel we had two former mentors, Rik Van Riel and Tiffany Antopolski (who is also a former intern), Bhaktipriya Shridhar (a current intern in the Linux kernel), myself as a former intern, and Karen Sandler as host and as part of the organization of the Outreachy program at Software Freedom Conservancy.
Each one shared their experience as a mentor or as an intern.
 
 
 

CPUfreq and The Scheduler: Revolution in CPU Power Management - Rafael J. Wysocki, Intel OTC

To save power when the system can't go idle, CPUFreq can decrease or increase the clock frequency of the CPU based on the current workload.
Rafael Wysocki (ACPI core maintainer) explained the architecture of the old system, which was based on timers that would sample the load from time to time and update the clock frequency accordingly. The new system provides much better results by using a scheduler-driven mechanism instead of timers, using data from the scheduler to decide on the next frequency.

 

Bringing Android Explicit Fencing to Mainline - Gustavo Padovan, Collabora Ltd.

 
In this talk, Gustavo Padovan explained how graphics fences are exposed to userspace to synchronize buffer sharing and increase performance, compared to implicit fencing, where userspace is not aware of the fences.

 
 

The gala party:


On the last day we had a great gala for the 25th anniversary of Linux.



I had the pleasure of a great conversation with Eduardo Habkost from Red Hat, who has worked with virtualization for 10+ years and gave me a great explanation of how Qemu connects with KVM.




This conference was not only about getting updates from around the Linux community; I also had the pleasure of meeting great people, and finally met in person several people who I only knew through IRC, confirming that they were not bots! xD

Special thanks to Allison Lortie and William Hua from Canonical who showed us the city and made it such a pleasant trip.
 



by Helen Fornazier (noreply@blogger.com) at August 31, 2016 06:34 PM

August 30, 2016

memcpy.io - Robert Foss

Building Android for Qemu (with Mesa and Virgil3D)

Developing Linux for Android on Qemu allows you to do some things that are not necessarily possible using the stock emulator. For my purposes I need access to a GPU and to be able to modify the driver, which is where virglrenderer and Qemu come in handy.

The guide below helps you compile Android and run it on top of Qemu with Mesa/virglrenderer supplying a virtual GPU. Because of this, the following guide is aimed at Linux hosts.

This guide is based on Rob Herring's fantastic guide, but has been slightly streamlined and had physical hardware support stripped out.

Install dependencies

These dependencies were available on Ubuntu 16.04; some alternative packages might be needed for other distributions.

sudo apt install autoconf gcc-aarch64-linux-gnu libaio-dev libbluetooth-dev libbrlapi-dev libbz2-dev libcap-dev libcap-ng-dev libcurl4-gnutls-dev libepoxy-dev libfdt-dev libgbm-dev libgles2-mesa-dev libglib2.0-dev libibverbs-dev libjpeg8-dev liblzo2-dev libncurses5-dev libnuma-dev librbd-dev librdmacm-dev libsasl2-dev libsdl1.2-dev libsdl2-dev libseccomp-dev libsnappy-dev libssh2-1-dev libtool libusb-1.0-0 libusb-1.0-0-dev libvde-dev libvdeplug-dev libvte-2.90-dev libxen-dev valgrind xfslibs-dev xutils-dev zlib1g-dev

Set up paths

Naturally all of the paths below are configurable, this is just what I used.

export PROJECT_PATH="/opt/qemu_android"
export VIRGLRENDERER_PATH="${PROJECT_PATH}/virglrenderer"
export QEMU_PATH="${PROJECT_PATH}/qemu"
export LINUX_PATH="${PROJECT_PATH}/linux"
export ANDROID_PATH="${PROJECT_PATH}/android"
export ANDROID_TOOLS_PATH="${PROJECT_PATH}/android-tools"

Virglrenderer

Virglrenderer creates a virtual 3D GPU that allows the Qemu guest to use the graphics capabilities of the host machine.

git clone git://git.freedesktop.org/git/virglrenderer ${VIRGLRENDERER_PATH}
cd ${VIRGLRENDERER_PATH}
./autogen.sh
make
sudo make install

Qemu

Qemu is a full system emulator, and supports a multitude of machine architectures. We're going to use x86_64 but also build support for arm64/aarch64.

git clone git://git.qemu-project.org/qemu.git ${QEMU_PATH}
mkdir ${QEMU_PATH}/build
cd ${QEMU_PATH}/build
../configure --target-list=aarch64-softmmu,x86_64-softmmu --enable-gtk --with-gtkabi=3.0 --enable-kvm
make -j"$(nproc)"

Linux kernel

Build trunk of mainline linux kernel.

Important: The below instructions use upstream/master but during testing of this guide, https://git.kernel.org/pub/scm/linux/kernel/git/padovan/linux.git and the fences branch was used due to SW_SYNC not yet being included in upstream. Inclusion is targeted for v4.9.

git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git ${LINUX_PATH}
cd ${LINUX_PATH}
wget http://memcpy.io/files/2016-08-30/Kconfig -O ${LINUX_PATH}/.config
make oldconfig
make -j"$(nproc)"

Important: If you decide not to use the .config linked in this step, a few Kconfig options need to be set:

CONFIG_ANDROID=y
CONFIG_ANDROID_BINDER_IPC=y
CONFIG_AUDIT=y
CONFIG_HAVE_ARCH_AUDITSYSCALL=y
CONFIG_AUDITSYSCALL=y
CONFIG_AUDIT_WATCH=y
CONFIG_AUDIT_TREE=y
CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SELINUX_BOOTPARAM=y
CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=1
CONFIG_SECURITY_SELINUX_DISABLE=y
CONFIG_SECURITY_SELINUX_DEVELOP=y
CONFIG_SECURITY_SELINUX_AVC_STATS=y
CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=0
CONFIG_DEFAULT_SECURITY_SELINUX=y
CONFIG_DEFAULT_SECURITY="selinux"
CONFIG_VIRTIO_BLK=y
CONFIG_SCSI_VIRTIO=y
CONFIG_VIRTIO_NET=y
CONFIG_VIRTIO_CONSOLE=y
CONFIG_HW_RANDOM_VIRTIO=y
CONFIG_DRM_VIRTIO_GPU=y
CONFIG_VIRT_DRIVERS=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_PCI_LEGACY=y
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_INPUT=y
CONFIG_VIRTIO_MMIO=y
CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y
CONFIG_NET_9P=y
CONFIG_NET_9P_VIRTIO=y
CONFIG_SYNC=y
CONFIG_SW_SYNC=y
CONFIG_SYNC_FILE=y

Android

Build the Android Open Source Project.

Important: When running source build/envsetup.sh make sure that you are using bash. I had issues running lunch using zsh.

mkdir ${ANDROID_PATH}
cd ${ANDROID_PATH}
repo init -u https://android.googlesource.com/platform/manifest -b master
cd .repo
git clone https://github.com/robherring/android_manifest.git -b android-6.0 local_manifests
cd ..
repo sync -j20
cd device/linaro/generic
make defconfig
make all
cd ../../..
# The following snippet must be run in bash
bash
source build/envsetup.sh
# Select linaro_x86_64-userdebug
lunch
make -j"$(nproc)"
# We don't need to use bash any longer
exit

As of this writing, the DRM fences related patches by Gustavo Padovan have yet to be included in AOSP, and therefore have to be applied manually until they are upstreamed. After switching to this branch, the AOSP project has to be rebuilt.

cd $ANDROID_PATH/system/core/
git remote add padovan git://git.collabora.com/git/user/padovan/android-system-core.git
git fetch padovan
git checkout padovan/master

mkbootimg

Fetch the make boot image script. This script later assembles the boot image, boot.img.

git clone https://android.googlesource.com/platform/system/core.git $ANDROID_TOOLS_PATH

Run Qemu machine

When running the below script, make sure that all of the paths from step two have been exported.

wget http://memcpy.io/files/2016-08-30/boot_android_qemu.sh -O ${PROJECT_PATH}/boot_android_qemu.sh
chmod +x ${PROJECT_PATH}/boot_android_qemu.sh
${PROJECT_PATH}/boot_android_qemu.sh x86_64

Conclusion

Hopefully this guide will have enabled you to build the required software and run Android on Qemu with a virtual GPU. This post has been a part of work undertaken by my employer Collabora.

by Robert Foss at August 30, 2016 01:22 PM

August 25, 2016

Gustavo Padovan

Slides for my LinuxCon talk on Mainline Explicit Fencing

For those of you who are interested, here are the slides of my presentation at LinuxCon North America this week. The conference was great, with very good talks and very interesting meetings in the hallway track.

My presentation covered the effort to create the Explicit Fencing mechanism in the Linux kernel, which is to be used mainly by the graphics pipeline. In short, Explicit Fencing is a way to give userspace information about the current state of shared buffers inside the kernel. This is done through fences, which can then be passed around to userspace and/or other kernel drivers for synchronization purposes. This allows both userspace and the kernel to wait for kernel jobs to finish without blocking. It also significantly helps the compositor make more efficient and smarter decisions on scheduling frames to display on the screen. I’ll be posting an article with more details on it soon. :)

Finally I would like to thank Collabora for sponsoring my travel to LinuxCon.

by Gustavo Padovan at August 25, 2016 03:45 PM

memcpy.io - Robert Foss

Ethernet device stress testing

During testing of power management patches for USB ethernet dongles, a script was needed to stress test connecting/disconnecting/reconnecting these devices.

Luckily a script like that already exists as a part of the chromiumos project, and can be found here.

That script does not, however, run standalone; it requires a remote device (a Chromebook) to execute on. So I took the liberty of changing it to support local testing. The modified version can be found here.

This might come in handy for someone, if not, the script will at least be archived on this site.

Example

$ sudo pip2.7 install autotest
$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: wlp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DORMANT group default qlen 1000
    link/ether 48:e2:44:f6:e8:5b brd ff:ff:ff:ff:ff:ff
27: enx000ec689ab9e: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:0e:c6:89:ab:9e brd ff:ff:ff:ff:ff:ff
$ export INTERFACE=enx000ec689ab9e
$ export NUM_ITERATIONS=10
$ sudo python network_EthernetStressPlug.py $INTERFACE $NUM_ITERATIONS

by Robert Foss at August 25, 2016 11:05 AM

August 23, 2016

Helen Koike

Increased performance of emulated NVMe devices

Nowadays, in Google Compute Engine (GCE), it is possible to attach a local SSD with the NVMe interface to your virtual machine. Unfortunately, you only get a good number of iops (input/output operations per second) if you instantiate a machine with the nvme-backports-debian-7-wheezy image; other available distributions on GCE will get a lower number of iops.

It turns out that Google's Virtual Machine Monitor (aka hypervisor) implements a custom NVMe command that allows the number of iops to increase up to 4 times (note: this is from what I've tested so far, but it seems to be possible to get up to 5 times faster according to the original commit message; check the Technical Details section to see how this is possible). However, the kernel you use needs to support it, and this is not yet the case with the mainline kernel.

This is not exclusive to GCE, as Google released a patch not only for the kernel but also for qemu; it is available here.

Collabora has been helping update, refactor and review the patches to the Linux kernel to send them upstream. However, since this is not yet an official NVMe standard, they shouldn't be merged into the mainline kernel, as the specification may still change.

Seeing as it considerably increases performance, the feature is in the process of being discussed and proposed to the NVMe workgroup with Collabora's help.
While nvmexpress.org seems interested in adding an official extension to standardize it, as published on the mailing list, nothing has been defined yet; this is a very recent discussion and it can take up to a year to be ratified by the NVMe workgroup.

So, for the time being, you can get a more recent version of the patch and install the driver yourself here: https://git.collabora.com/cgit/user/koike/linux.git/log/?h=nvme/dev

How does it work?

Technical details

 

The NVMe interface basically works with command queues. The driver writes a command in a region known to both the driver and the device controller, and then updates the tail of the queue by writing to an MMIO register called a doorbell.

In an environment with several guest OSes on top of a VMM sharing a resource, communication between a guest OS and the real device is usually trapped by the VMM. As MMIO is usually a synchronous access to the device, every MMIO access will cause a trap.

Example of emulated device in the VMM
The main idea here is to decrease the number of traps to the VMM by reducing the number of writes to the doorbells.

This is achieved in two ways:
    1) Batching; or
    2) Letting the VMM pull the current doorbell value when it is already executing.

The first one is easy: we can wait for X commands to be written to the queue before ringing the doorbell.
The second one is a bit more complicated: the guest OS needs to inform the emulated device in the VMM where it can pull the doorbell values from, and the emulated NVMe device needs to inform the guest OS that it can restart the counter of X.

This is what this new feature does:
It adds a new command to the NVMe interface by which the driver sends the NVMe device controller two memory buffers (sketched below):
1) A buffer holding the real doorbell values: instead of writing to the MMIO doorbell, the driver writes the value to this buffer; and
2) Another buffer with a hint from the controller about how many commands the driver can write to the queue without ringing the doorbell.
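
As a rough sketch of the resulting driver logic (all names here are invented for illustration; the real patches differ in the details), ringing a submission-queue doorbell becomes:

#include <stdint.h>

/* Hypothetical per-queue state for the two shared buffers above. */
struct sq_state {
    volatile uint32_t *db_mmio;   /* real MMIO doorbell: traps to the VMM */
    volatile uint32_t *shadow_db; /* buffer 1: driver writes tails here */
    volatile uint32_t *event_idx; /* buffer 2: hint written by the VMM */
};

static void ring_doorbell(struct sq_state *sq, uint32_t old_tail,
                          uint32_t new_tail)
{
    *sq->shadow_db = new_tail; /* always visible to the emulated device */
    __sync_synchronize();      /* order shadow write vs. the hint read */

    /* Only take the expensive MMIO trap when the new tail passes the
     * controller's hint, i.e. the VMM asked to be woken up. */
    if ((uint32_t)(new_tail - *sq->event_idx - 1) <
        (uint32_t)(new_tail - old_tail))
        *sq->db_mmio = new_tail;
}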


The exact technical details may still change in the future, especially regarding how to properly implement the second item above. It is also very likely that Google's patches won't be compliant with the future ratified standard.

For the time being though, you can use the Collabora tree. Please let me know if you have any comments/feedback!

by Helen Fornazier (noreply@blogger.com) at August 23, 2016 09:25 PM

August 12, 2016

Philip Withnall

Thoughts about reviewing large patchsets

I have recently been involved in reviewing some large feature patchsets for a project at work, and thought it might be interesting to discuss some of the principles we have been trying to stick to when going about these reviews.

These are just suggestions which will not apply verbatim to every project, but they might provoke some ideas which are relevant to your project. In many cases, a developer might find a checklist is not useful for them — it is too rigid, or slows down the pace of development too much. Checklists are probably best treated as strong suggestions, rather than strict requirements for a review to pass; developers need to use them as reminders for things they should think about when submitting a patch, rather than a box-ticking exercise. Checklists probably work better in larger projects with lots of infrequent or inexperienced contributors, who may not be aware of all of the conventions of the project.

Overall principles

In order to break a set of reviews down into chunks, there a few key principles to stick to:

  • Review patches which are as small as possible, but no smaller (see here, here and here)
  • Learn from each review so the same review comments do not need to be made more than once
  • Use automated tools to eliminate many of the repetitive and time consuming parts of patch review and rework
  • Do high-level API review first, ideally before implementing those APIs, to avoid unnecessary rework

Pre-submission checklist

(A rationale for each of these points is given in the section below to avoid cluttering this one.)

This is the pre-submission checklist we have been using to check patches against before submission for review:

  1. All new code follows the coding guidelines, especially the namespacing guidelines, memory management guidelines, pre- and post-condition guidelines, and introspection guidelines — some key points from these are pulled out below, but these are not the only points to pay attention to.
  2. All new public API must be namespaced correctly.
  3. All new public API must have a complete and useful documentation comment.
  4. All new public API documentation comments must have GObject Introspection annotations where appropriate; g-ir-scanner (part of the build process) must emit no warnings when run with --warn-all --warn-error (which should be set by $(WARN_SCANNERFLAGS) from AX_COMPILER_FLAGS).
  5. All new public methods must have pre- and post-conditions to enforce constraints on the accepted parameter values (see the sketch just after this checklist).
  6. The code must compile without warnings, after ensuring that AX_COMPILER_FLAGS is used and enabled in configure.ac (if it is correctly enabled, compiling the module should fail if there are any compiler warnings) — remember to add $(WARN_CFLAGS), $(WARN_LDFLAGS) and $(WARN_SCANNERFLAGS) to new Makefile.am targets as appropriate.
  7. The introduction documentation comment for each new object must give a usage example for each of the main ways that object is intended to be used.
  8. All new code must be formatted as per the project’s coding guidelines.
  9. If possible, there should be an example program for each new feature or object, which can be used to manually test that functionality — these examples may be submitted in a separate patch from the object implementation, but must be submitted at the same time as the implementation in order to allow review in parallel. Example programs must be usable when installed or uninstalled, so they can be used during development and on production machines.
  10. There must be automated tests (using the GTest framework in GLib) for construction of each new object, and for getting and setting each of its properties.
  11. The code coverage of the automated tests must be checked (using make check-code-coverage and AX_CODE_COVERAGE) before submission, and if it’s possible to add more automated tests (and for them to be reliable) to improve the coverage, this should be done; the final code coverage figure for the object should be mentioned in a comment on the patch review, and it would be helpful to have the lcov reports for the object saved somewhere for analysis as part of the review.
  12. There must be no definite memory leaks reported by Valgrind when running the automated tests under it (using AX_VALGRIND_CHECK and make check-valgrind).
  13. All automated tests must be installed as installed-tests so they can be run as integration tests on a production system.
  14. make distcheck must pass before submission of any patch, especially if it touches the build system.
  15. All new code has been checked to ensure it doesn’t contradict review comments from previous reviews of other patches (i.e. we want to avoid making the same review comments on every submitted patch).
  16. Commit messages must explain why they make the changes they do.
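
As a short sketch of what the pre- and post-condition item means in practice (the function below is invented purely for the example), GLib-style code typically guards every public entry point like this:

#include <glib.h>

gchar *
example_frobnicator_run (gpointer self, const gchar *input, guint n_times)
{
  gchar *retval;

  /* Pre-conditions: catch programmer errors at the API boundary,
   * close to their origin, instead of crashing deep inside. */
  g_return_val_if_fail (self != NULL, NULL);
  g_return_val_if_fail (input != NULL, NULL);
  g_return_val_if_fail (n_times > 0, NULL);

  retval = g_strdup (input);  /* stand-in for the real work */

  /* Post-condition: enforce the documented guarantee on the result. */
  g_assert (retval != NULL);

  return retval;
}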

Rationales

  1. Each coding guideline has its own rationale for why it’s useful, and many of them significantly affect the structure of a patch, so are important to get right early on.
  2. Namespacing is important for the correct functioning of a lot of the developer tools (for example, GObject Introspection), and to avoid symbol collisions between libraries — checking it is a very mechanical process which it is best to not have to spend review time on.
  3. Documentation comments are useful to both the reviewer and to end users of the API — for the reviewer, they act as an explanation of why a particular API is necessary, how it is meant to be used, and can provide insight into implementation choices. These are questions which the reviewer would otherwise have to ask in the review, so writing them up lucidly in a documentation comment saves time in the long run.
  4. GObject Introspection annotations are a requirement for the platform’s language bindings (to JavaScript or Python, for example) to work, so must be added at some point. Fixing the error messages from g-ir-scanner is sufficient to ensure that the API can be introspected.
  5. Pre- and post-conditions are a form of assertion in the code, which check for programmer errors at runtime. If they are used consistently throughout the code on every API entry point, they can catch programmer errors much nearer their origin than otherwise, which speeds up debugging both during development of the library, and when end users are using the public APIs. They also act as a loose form of documentation of what each API will allow as its inputs and outputs, which helps review (see the comments about documentation above).
  6. The set of compiler warnings enabled by AX_COMPILER_FLAGS have been chosen to balance false positives against false negatives in detecting bugs in the code. Each compiler warning typically identifies a single bug in the code which would otherwise have to be fixed later in the life of the library — fixing bugs later is always more expensive in terms of debugging time.
  7. Usage examples are another form of documentation (as discussed above), which specifically make it clearer to a reviewer how a particular feature is intended to be used. In writing usage examples, the author of a patch can often notice awkwardnesses in their API design, which can then be fixed before review — this is faster than them being caught in review and sent back for modification.
  8. Well formatted code is a lot easier to read and review than poorly formatted code. It allows the reviewer to think about the function of the code they are reviewing, rather than (for example) which function call a given argument actually applies to, or which block of code a statement is actually part of.
  9. Example programs are a loose form of testing, and also act as usage examples and documentation for the feature (see above). They provide an easy way for the reviewer to test a feature, especially if it affects a UI or has some interactive element; this is very hard to do by simply looking at the code in a patch. Their biggest benefit will be when the feature is modified in future — the example programs can be used to test changes to the feature and ensure that its behaviour changes (or does not) as expected.
  10. For each unit test for a piece of code, the behaviour checked by that unit test can be guaranteed to be unchanged across modifications to the code in future. This prevents regressions (especially if the unit tests for the project are set up to be run automatically on each commit by a continuous integration system). The value of unit tests when initially implementing a feature is in the way they guide API design to be testable in the first place. It is often the case that an API will be written without unit tests, and later someone will try to add unit tests and find that the API is untestable; typically because it relies on internal state which the test harness cannot affect. By that point, the API is stable and cannot be changed to allow testing.
  11. Looking at code coverage reports is a good way to check that unit tests are actually checking what they are expected to check about the code. Code coverage provides a simple, coarse-grained metric of code quality — the quality of untested code is unknown.
  12. Every memory leak is a bug, and hence needs to be fixed at some point. Checking for memory leaks in a code review is a very mechanical, time-consuming process. If memory leaks can be detected automatically, by using valgrind on the unit tests, this reduces the amount of time needed to catch them during review. This is an area where higher code coverage provides immediate benefits. Another way to avoid leaks is to use g_autoptr() to automatically free memory when leaving a control block (a short sketch follows this list).
  13. If all automated tests are available, they can be run as part of system-wide integration tests, to check that the project behaviour doesn’t change when other system libraries (its dependencies) are changed. This is one of the motivations behind installed-tests. This is a one-time setup needed for your project, and once it’s set up, does not need to be done for each commit.
  14. make distcheck ensures that a tarball can be created successfully from the code, which entails building it, running all the unit tests, and checking that examples compile.
  15. If each patch is updated to learn from the results of previous patch reviews, the amount of time spent making and explaining repeated patch review comments should be significantly reduced, which saves everyone’s time.
  16. Commit messages are a form of documentation of the changes being made to a project. They should explain the motivation behind the changes, and clarify any design decisions which the author thinks the reviewer might question. If a commit message is inadequate, the reviewer is going to ask questions in the review which could have been avoided otherwise.
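
To illustrate the g_autoptr()/g_autofree point from rationale 12, here is a minimal sketch (the helper is invented for the example): the memory is freed automatically on every return path, so early returns cannot leak.

#include <glib.h>

static gboolean
parse_number (const gchar *text, gint64 *out)
{
  /* Freed automatically when the variable goes out of scope. */
  g_autofree gchar *stripped = g_strstrip (g_strdup (text));

  if (*stripped == '\0')
    return FALSE;  /* "stripped" is freed here too: no leak */

  *out = g_ascii_strtoll (stripped, NULL, 10);
  return TRUE;
}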

by Philip Withnall at August 12, 2016 07:22 AM

August 10, 2016

Jeremy Whiting

I'm here

It's been over a year since I posted anything. That's way too long. So what's going on in the projects I care about lately?
Speech-dispatcher will soon have a 0.8.5 release. There's a set of patches in the works, on a branch on github, that moves the audio to the server, so we will be able to do useful things like label each pulse audio output with the client application name rather than a generic sd_espeak or sd_pico name per output module. This way you'll see things like "Konsole speech" or "Konversation speech" volume controls for the speech output of each application. For this to work, some other refactoring needs to be done in the espeak and other modules so that stopping audio playback is immediate.
QtSpeech has had some work done; the Android and Windows versions have been seeing some love lately. I'm optimistic that it can be included in an upcoming Qt release, though I'm not sure why it hasn't been included yet.
KMouth has been waiting on a QtSpeech release in order for its kf5/qt5 branch to be merged to master.
KNewStuff could use some work. There was talk at a recent conference about adding the properties and such necessary for it to be used from QML. I'll follow up on that and see what has become of it.
All in all I feel like I haven't been around as much in the above projects as I'd like to have been. Life is busy and work is busy and such. I plan to spend a bit more time on these though. Even if it means I get slightly less sleep. Looking forward to a good rest of the year.

by Jeremy Whiting (noreply@blogger.com) at August 10, 2016 08:40 PM

July 26, 2016

Gustavo Padovan

Collabora contributions to Linux Kernel 4.7

Linux Kernel 4.7 was released this week with a total of 36 contributions from five Collabora engineers. It includes the first contributions from Helen as a Collaboran and the first ever kernel contributions from Robert Foss. Here are some of the highlights of the work Collabora has done on Linux Kernel 4.7.

Enric added support for the Analogix anx78xx DRM bridge and fixed two SD card related issues on OMAP igep00x0: remove/insert detection was fixed, and support to read the write-protect pin was enabled.

Gustavo de-staged the sync_file framework (the Android Sync framework), which will be used to add explicit fencing support to the graphics pipeline, and started work on cleaning up the usage of legacy vblank helpers.

Helen Koike created a separate module for the V4L2 Test Pattern Generator and fixed error returns in the pipeline validation code, while Robert Foss improved the DRM documentation and fixed drm/vc4 (Raspberry Pi) to handle the case where an update is already pending when atomic_commit is called.

Tomeu fixed two Rockchip issues while working on the intel-gpu-tools support for other platforms.

The per-author patch counts were: Enric Balletbo i Serra (6), Gustavo Padovan (22), Helen Koike (3), Robert Foss (3) and Tomeu Vizoso (2).

by Gustavo Padovan at July 26, 2016 03:19 PM

memcpy.io - Robert Foss

Linux kernel development shell scripts


While upstreaming kernel patches, scripts/checkpatch.pl and scripts/get_maintainer.pl often come in handy. But the interface they provide is slightly bulky and relies on patch files instead of git commits, which to me is a bit inconvenient.

These scripts are all meant to be included in .bashrc or .zshrc

scripts/checkpatch.pl helper

#!/bin/bash
# Run scripts/checkpatch.pl against the working tree, the staged changes,
# or the last N commits, without manually creating patch files.
function checkpatch {
  if [ -z "${1+x}" ]; then
    # No argument: check uncommitted changes in the working tree.
    git diff | scripts/checkpatch.pl --no-signoff -q -
  elif [[ $1 == *"cache"* ]]; then
    # "cache": check the staged (cached) changes.
    git diff --cached | scripts/checkpatch.pl --no-signoff -q -
  else
    # A number: check the last N commits.
    NUM_COMMITS=$1
    git diff HEAD~$NUM_COMMITS..HEAD | scripts/checkpatch.pl --no-signoff -q -
  fi
}

The checkpatch function simply wraps the patch creation process and lets you specify right away which changes to check: the working tree, the staging area, or the last N commits.

Example

~/work/linux $ checkpatch 15
WARNING: ENOSYS means 'invalid syscall nr' and nothing else
#349: FILE: drivers/tty/serial/sh-sci.c:3026:
+   if (IS_ERR(sciport->gpios) && PTR_ERR(sciport->gpios) != -ENOSYS)

total: 0 errors, 1 warnings, 385 lines checked

In this example the last 15 commits are checked against scripts/checkpatch.pl for correctness.
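
The staged-changes variant is handy before committing; for example (the file name here is just an illustration):

~/work/linux $ git add -p drivers/tty/serial/sh-sci.c
~/work/linux $ checkpatch cache

This checks only what is currently in the staging area, i.e. exactly what would end up in the next commit.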

scripts/get_maintainer.pl helper

#!/bin/bash
function get_maintainer {
  NUM_COMMITS=$1

  MAINTAINERS=$(git format-patch HEAD~$NUM_COMMITS..HEAD --stdout | scripts/get_maintainer.pl)

  # Remove extraneous stats
  MAINTAINERS=$(echo "$MAINTAINERS" | sed 's/(.*//g')

  # Remove names from email addresses
  MAINTAINERS=$(echo "$MAINTAINERS" | sed 's/.*<//g')

  # Remove left over character
  MAINTAINERS=$(echo "$MAINTAINERS" | sed 's/>//g')

  echo "$MAINTAINERS" | while read email; do
    echo -n "--to=${email}  ";
  done
}

Example

~/work/linux $ get_maintainer 1
--to=gregkh@linuxfoundation.org  --to=jslaby@suse.com  --to=linux-serial@vger.kernel.org  --to=linux-kernel@vger.kernel.org

~/work/linux $ git send-email -1 $(get_maintainer 1)

by Robert Foss at July 26, 2016 08:32 AM

June 30, 2016

Nicolas Dufresne

GStreamer Echo Canceller

For a long time I believed that echo cancellers had no place inside GStreamer. The theory was that GStreamer was too high level and would never be able to provide accurate enough delay information for any canceller to work. With a fairly simple test, I could quickly confirm that the reported latency is often off by a period (generally 10ms). This isn't strictly GStreamer's fault and is not in any way catastrophic for the general playback experience.

With the arrival of WebRTC in browsers, it most likely became apparent that, to be cross-platform, browsers needed to have their own canceller. That's exactly what happened in libWebRTC (formerly libjingle, used in both Firefox and Chrome to implement WebRTC). They implemented an echo canceller that accepts an approximate delay, and this changes everything for GStreamer.

At Collabora, I recently had the opportunity to implement an echo canceller based on WebRTC Audio Processing. The main motivation was that the canceller on the hardware DSP we had didn't work due to a hardware bug. A lot of those boards had been produced and no rework was possible. To save these boards, we decided to try a software echo canceller. Even though it used a fair amount of CPU, the experiment was a success. I have since cleaned up the code, and the new elements are now available in GStreamer Plugins Bad.

How does it work?

Echo

The first step is to understand what the echo is. In a phone call over loudspeaker, your microphone records both your voice and the far-end voices. The side effect is that you are sending the far-end listeners your voice along with a bad copy of their own voices from a moment before (the echo). To avoid this echo, you need to monitor the far-end stream that you are playing back and "subtract" it from the recorded stream. In practice, it's a much more complex task, since the signal is deteriorated by the speaker and the microphone. You also need to figure out the delays and hint the canceller, otherwise you may end up with a terrible startup time, or it may simply not work.

The implementation was greatly inspired by an experiment Olivier Crête did in 2008 using Speex DSP. I must admit, I never really understood his way of synchronizing the streams and pretty much ignored all the code that wasn't GStreamer-specific. The design works this way: you have a DSP element (webrtcdsp) that processes the recorded stream, and a probe element (webrtcechoprobe) that analyses the far-end stream (before playing it back). Due to a WebRTC library limitation, these two elements transform the input buffers into chunks of 10ms. This is done with the help of GstAdapter. On the probe side, we push buffers into the adapter with timestamps transformed to running time. This time, plus the pipeline latency, gives us the moment in running time when the buffer should be heard by the microphone. We then synchronize the far-end data against the recorded data and let the WebRTC Audio Processing library do its magic. A simple way of testing the element is with an echo loop.

  gst-launch-1.0 pulsesrc ! webrtcdsp ! webrtcechoprobe ! pulsesink

Without the canceller, this pipeline would create a lot of echo, and probably end in loud feedback if your microphone volume is high enough. With the canceller, you should instead hear only one echo. It behaves a bit like a sound monitor, but with a rather high latency and the side effect of fading monotonic frequencies in and out. After all, this is not what the algorithm was designed for. Try it in your real audio call application; that's where you will most likely get the best results.

Before I conclude, there is a good reason why I called the element DSP rather than AEC. WebRTC Audio Processing is much more than just an echo canceller. In fact, it implements a wide variety of filters: noise suppression, voice activity detection, etc. Currently we enable only a subset of it, but I'm definitely looking forward to enabling more (if not all) features from this library. I also encourage contributions. This work was only possible because of the great effort Arun Raghavan has put into extracting the echo canceller from the WebRTC project, creating a standalone library usable by all. If you are interested in what cool features could be added in the future, have a look at Arun's blog about beamforming. And lastly, thanks to my colleagues who had to suffer me speaking to my computer while listening to my echo for a few weeks.

by Nicolas at June 30, 2016 10:14 PM

June 23, 2016

Frederic Plourde

SVVR2016


I've been fortunate enough lately to attend the largest professional virtual reality event/conference: SVVR. This virtual reality conference has been held each year in Silicon Valley for 3 years now. This year, it showcased more than 100 VR companies on the exhibit floor and welcomed more than 1400 VR professionals and enthusiasts from all around the world. As a VR enthusiast myself, I attended the full 3-day conference, met most of the exhibitors, and I'd like to summarize my thoughts and the things I learned below, grouped under various themes. This post is by no means exhaustive and consists of my own, personal opinions.

CONTENT FOR VR

I realize that content creation for VR is really becoming the one area where most players will end up working. Hardware manufacturers and platform software companies are building the VR infrastructure as we speak (and it's already comfortably usable), but as we move along and standards become more solid, I'm pretty sure we're going to see lots and lots of new start-ups in the VR content world, creating immersive games, 360° video content, live VR events, etc… Right now, the range of deployment options for a content developer is not very broad. The vast majority of content creators are targeting the Unity3D plug-in, since it's got built-in support for virtually all VR devices there are on the market, like the Oculus family of headsets, the HTC Vive, PlayStation VR, Samsung's GearVR, and even generic D3D or OpenGL-based applications on PC/Mac/Linux.

2 types of content

There really are two main types of VR content out there: first, artificially-generated 3D virtual content, and second, 360° real-life captured content.


The former is what we usually refer to when thinking about VR, that is, computer-generated 3D worlds, e.g. in games, in which the VR user can wander and interact. This is usually the kind of content used in VR games, but also in VR applications, like Google's great drawing app called TiltBrush (more info below). Click here to see a nice demo video!

The latter is everything that's not generated but rather "captured" from real life and projected or rendered in the VR space, most commonly with the use of spherical projections and post-processing stitching and filtering. Simply said, we're talking about 360° videos here (both 2D and 3D). Usually, this kind of content does not let VR users interact with the VR world as "immersively" as computer-generated 3D content. It's rather "played back" and "replayed" just like regular online television series, for example, except for the fact that watchers can "look around".

At SVVR2016, there were so many exhibitors doing VR content… Like InnerVision VR, Baobab Studios, SculptVR, MiddleVR, Cubicle ninjas, etc… on the computer-generated side, and Facade TV, VR Sports, Koncept VR, etc… on the 360 video production side.

TRACKING

Personally, I think tracking is by far the most important factor when considering the whole VR user experience. You have to actually try the HTC Vive tracking system to understand. The HTC Vive uses two "Lighthouse" camera towers placed in the room to let you track a larger space, something close to 15′ x 15′! I tried it a lot of times and the tracking always seemed to stay solid and constant. With the Vive you can literally walk in the VR space, zig-zag, leap and dodge without losing detection. On that front, I think the competition is doing quite poorly. For example, Oculus' CV1 only tracks your movement from the front, and the tracking angle is pretty narrow… tracking was often lost when I faced away just a little… disappointing!

Talking about tracking, one of the most amazing talks was Leap Motion CTO David Holz's demo of his brand new 'Orion', a truly impressive hand-tracking camera with very powerful detection algorithms and very, very low latency. We could only "watch" David interact, but it looked so natural! Check it out for yourself!

AUDIO

Audio is becoming increasingly crucial to the VR workflow since it adds so much to the VR experience. It is generally agreed in the VR community that awesome, well 3D-localised audio that seems "real" can add a lot of realism even to the visuals. At SVVR2016, there were a few audio-centric exhibitors like Ossic and Subpac. The former is releasing a Kickstarter-funded 3D headset that lets you "pan" stereo audio content by rotating your head left-right. The latter is showcasing a complete body suit using tactile transducers and vibrotactile membranes to make you "feel" audio. The goal of this article is not to review specific technologies, but to discuss every aspect and domain of the VR experience, and when it comes to audio, I unfortunately feel we're still at the "3D sound is enough" level, but I believe it's not.

See, proper 3D audio localization is a must of course. You obviously do not want to play a VR game where a dog appearing on your right is barking on your left!… nor do you want to have the impression a hovercraft is approaching up ahead when it's actually coming from the back. Fortunately, we now have pretty good audio engines that correctly render audio coming from anywhere around you with good front/back discrimination. A good example of that is 3Dception from TwoBigEars. 3D spatialization of audio channels is a must-have and yet, it's an absolute minimum in my opinion. Try it for yourself! Most of today's VR games have spatially coherent sound, but most of the time, you just do not believe the sound is actually "real". Why?

Well, there are a number of reasons, going from "limited audio diversity" (a limited number of objects/details found in the audio feed… like missing tiny air flows/sounds, the user's respiration or the room's ambient noise level) to limited sound cancellation capability (the ability to suppress high-pitched ambient sounds coming from "outside" the game), but I guess one of the most important factors is simply the way audio is recorded and rendered in our day-to-day cheap stereo headsets… A lot of promise comes with binaural recording and stereo-to-binaural conversion algorithms. Binaural recording is a technique that records audio through two tiny omni microphones located under diaphragm structures resembling the human ears, so that audio is bounced back just like it is being routed through the human ears before reaching the microphones. The binaural audio experience is striking and the "stereo" feeling is magnified. It is very difficult to explain; you have to hear it for yourself. Talking about ear structure that has a direct impact on the audio spectrum, I think one of the most promising techniques moving forward for added audio realism will be the whole field of geometry-based audio modeling, where you can basically render sound as if it had actually been reflected off computer-generated 3D geometry. Using such models, a dog barking in front of a tiled metal shed will sound really different than the same dog barking near a wooden chalet. The brain does pick up those tiny details, and that's why you find guys like Nvidia releasing their brand new "Physically Based Acoustic Simulator Engine" in VrWorks.

HAPTICS

Haptics is another very interesting VR domain that consists of letting users perceive virtual objects not through visual or aural channels, but through touch. Usually, this sense of touch in the VR experience is brought in by the use of special haptic wands that, using force feedback and other technologies, make you think that you are actually pushing an object in the VR world.

You mostly find two types of haptic devices out there: wand-based and glove-based. Gloves for haptics are of course more natural to most users. It's easy to picture yourself in a VR game trying to "feel" raindrops falling on your fingers, or in a flight simulator, pushing buttons and really feeling them. However, from talking to many exhibitors at SVVR, it seems we'll be stuck at the "feel button pushes" level for quite some time, as we're very far from being able to render "textures", since the spatial resolutions involved would simply be too high for any haptic technology that's currently available. There are some pretty cool start-ups with awesome glove-based haptic technologies, like the Kickstarter-funded Neurodigital Technologies GloveOne or Virtuix's Hands Omni.

Now, I'm not saying wand-based haptic technologies are outdated and not promising. In fact, I think they are more promising than gloves for any VR application that relies on "tools", like a painting app requiring you to use a brush or a remote-surgery medical application requiring you to use an actual scalpel! When it comes to wands, tools and the like, the potential for haptic feedback is multiplied, because you simply have more room to fit more actuators and gyros. I once tried an arm-based 3D joystick in a CAD application and I could swear I was really hitting objects with my design tool… it was stunning!

SOCIAL

If VR really takes off in the consumer mass market someday soon, it will most probably be social. That's something I heard at SVVR2016 (paraphrased) in the very interesting talk by David Baszucki titled "Why the future of VR is social". I mean, in essence, let's just take a look at how technology is appropriated nowadays and acknowledge that the vast majority of applications rely on the "social" aspect, right? People want to "connect", "communicate" and "share". So when VR comes around, why would it suddenly be different? Of course, gamers will want to play really immersive VR games and workers will want to use VR in their daily tasks to boost productivity, but most users will probably want to put on their VR glasses to talk to their relatives, thousands of miles away, as if they were sitting in the same room. See? Even the gamers and the workers I referred to above will want to play or work "with other real people". No matter how you use VR, I truly believe the social factor will be one of the most important ones to consider when building successful software. At SVVR 2016, I discovered a very interesting start-up focused on the social VR experience. With mimesys's telepresence demo, using an HTC Vive controller, they had me collaborate on a painting with a "real" guy hooked up to the same system, painting from his home apartment in France, some 9850 km away, and I had a pretty good sense of his "presence". The 3D geometry and rendered textures were not perfect, but it was good enough for a true collaboration experience!

MOVING FORWARD

We're only at the very beginning of this very exciting journey through virtual reality, and it's really difficult to predict what VR devices will look like in only 3-5 years from now because things are just moving so quickly… A big area I did not cover in my post, and that will surely change a lot of parameters moving forward in the VR world, is AR (Augmented Reality). :) Check out what MagicLeap's up to these days!

by fredinfinite23 at June 23, 2016 12:04 PM

June 22, 2016

Philip Withnall

GTK+ hackfest 2016

A dozen GNOME hackers invaded the Red Hat office in Toronto last week, to spend four days planning the next year of work on our favourite toolkit, GTK+; and to think about how Flatpak applications can best integrate with the rest of the desktop.

What did we do?

  • Worked out an approach for versioning GTK+ in future, to improve the balance between stability and speed of development. This has turned into a wiki page.
  • I demoed Dunfell and added support for visualising GTasks to it. I don’t know how much time I will have for it in the near future, so help and feedback are welcome.
  • There was a detailed discussion of portals for Flatpak, including lots of use cases, and the basics of a security design were decided which allows the most code reuse while also separating functionality. Simon has written more about this.
  • I missed some of the architectural discussion about the future of GTK+ (including moving some classes around, merging some things and stripping out some outdated things), but I believe Benjamin had useful discussions with people about it.
  • Allan, Philip, Mike and I looked at using hotdoc for developer.gnome.org, and possible layouts for a new version of the site. Christian spent some time thinking about integration of documentation into GNOME Builder.
  • Allison did a lot of blogging, and plotted with Alex to add some devious new GVariant functionality to make everyone’s lives easier when writing parsers — I’ll leave her to blog about it.

Thanks to Collabora for sending me along to take part!

After the hackfest, I spent a few days exploring Toronto, and as a result ended up very sunburned.

[Photos: busy at work in the hackfest room; a totem pole in the Royal Ontario Museum, whose backstory was trippy; the Distillery District.]

by Philip Withnall at June 22, 2016 05:42 PM

June 20, 2016

Simon McVittie

GTK Hackfest 2016

I'm back from the GTK hackfest in Toronto, Canada and mostly recovered from jetlag, so it's time to write up my notes on what we discussed there.

Despite the hackfest's title, I was mainly there to talk about non-GUI parts of the stack, and technologies that fit more closely in what could be seen as the freedesktop.org platform than they do in GNOME. In particular, I'm interested in Flatpak as a way to deploy self-contained "apps" in a freedesktop-based, sandboxed runtime environment layered over the Universal Operating System and its many derivatives, with both binary and source compatibility with other GNU/Linux distributions.

I'm mainly only writing about discussions I was directly involved in: lots of what sounded like good discussion about the actual graphics toolkit went over my head completely :-) More notes, mostly from Matthias Clasen, are available on the GNOME wiki.

In no particular order:

Thinking with portals

We spent some time discussing Flatpak's portals, mostly on Tuesday. These are the components that expose a subset of desktop functionality as D-Bus services that can be used by contained applications: they are part of the security boundary between a contained app and the rest of the desktop session. Android's intents are a similar concept seen elsewhere. While the portals are primarily designed for Flatpak, there's no real reason why they couldn't be used by other app-containment solutions such as Canonical's Snap.

One major topic of discussion was their overall design and layout. Most portals will consist of a UX-independent part in Flatpak itself, together with a UX-specific implementation of any user interaction the portal needs. For example, the portal for file selection has a D-Bus service in Flatpak, which interacts with some UX-specific service that will pop up a standard UX-specific "Open" dialog — for GNOME and probably other GTK environments, that dialog is in (a branch of) GTK.

A design principle that was reiterated in this discussion is that the UX-independent part should do as much as possible, with the UX-specific part only carrying out the user interactions that need to comply with a particular UX design (in the GTK case, GNOME's design). This minimizes the amount of work that needs to be redone for other desktop or embedded environments, while still ensuring that the other environments can have their chosen UX design. In particular, it's important that, as much as possible, the security- and performance-sensitive work (such as data transport and authentication) is shared between all environments.

The aim is for portals to get the user's permission to carry out actions, while keeping it as implicit as possible, avoiding an "are you sure?" step where feasible. For example, if an application asks to open a file, the user's permission is implicitly given by them selecting the file in the file-chooser dialog and pressing OK: if they do not want this application to open a file at all, they can deny permission by cancelling. Similarly, if an application asks to stream webcam data, the UX we expect is for GNOME's Cheese app (or a similar non-GNOME app) to appear, open the webcam to provide a preview window so they can see what they are about to send, but not actually start sending the stream to the requesting app until the user has pressed a "Start" button. When defining the API "contracts" to be provided by applications in that situation, we will need to be clear about whether the provider is expected to obtain confirmation like this: in most cases I would anticipate that it is.

One security trade-off here is that we have to have a small amount of trust in the providing app. For example, continuing the example of Cheese as a webcam provider, Cheese could (and perhaps should) be a contained app itself, whether via something like Flatpak, an LSM like AppArmor or both. If Cheese is compromised somehow, then whenever it is running, it would be technically possible for it to open the webcam, stream video and send it to a hostile third-party application. We concluded that this is an acceptable trade-off: each application needs to be trusted with the privileges that it needs to do its job, and we should not put up barriers that are easy to circumvent or otherwise serve no purpose.

The main (only?) portal so far is the file chooser, in which the contained application asks the wider system to show an "Open..." dialog, and if the user selects a file, it is returned to the contained application through a FUSE filesystem, the document portal. The reference implementation of the UX for this is in GTK, and is basically a GtkFileChooserDialog. The intention is that other environments such as KDE will substitute their own equivalent.

Other planned portals include:

  • image capture (scanner/camera)
  • opening a specified URI
    • this needs design feedback on how it should work for non-http(s)
  • sharing content, for example on social networks (like Android's Sharing menu)
  • proxying joystick/gamepad input (perhaps via Wayland or FUSE, or perhaps by modifying libraries like SDL with a new input source)
  • network proxies (GProxyResolver) and availability (GNetworkMonitor)
  • contacts/address book, probably vCard-based
  • notifications, probably based on freedesktop.org Notifications
  • video streaming (perhaps using Pinos, analogous to PulseAudio but for video)

Environment variables

GNOME on Wayland currently has a problem with environment variables: there are some traditional ways to set environment variables for X11 sessions or login shells using shell script fragments (/etc/X11/Xsession.d, /etc/X11/xinit/xinitrc.d, /etc/profile.d), but these do not apply to Wayland, or to noninteractive login environments like cron and systemd --user. We are also keen to avoid requiring a Turing-complete shell language during session startup, because it's difficult to reason about and potentially rather inefficient.

Some uses of environment variables can be dismissed as unnecessary or even unwanted, similar to the statement in Debian Policy §9.9: "A program must not depend on environment variables to get reasonable defaults." However, there are two common situations where environment variables can be necessary for proper OS integration: search-paths like $PATH, $XDG_DATA_DIRS and $PYTHONPATH (particularly necessary for things like Flatpak), and optionally-loaded modules like $GTK_MODULES and $QT_ACCESSIBILITY where a package influences the configuration of another package.

There is a stopgap solution in GNOME's gdm display manager, /usr/share/gdm/env.d, but this is gdm-specific and insufficiently expressive to provide the functionality needed by Flatpak: "set XDG_DATA_DIRS to its specified default value if unset, then add a couple of extra paths".

pam_env comes closer — PAM is run at every transition from "no user logged in" to "user can execute arbitrary code as themselves" — but it doesn't support .d fragments, which are required if we want distribution packages to be able to extend search paths. pam_env also turns off per-user configuration by default, citing security concerns.

I'll write more about this when I have a concrete proposal for how to solve it. I think the best solution is probably a PAM module similar to pam_env but supporting .d directories, either by modifying pam_env directly or out-of-tree, combined with clarifying what the security concerns for per-user configuration are and how they can be avoided.

Relocatable binary packages

On Windows and OS X, various GLib APIs automatically discover where the application binary is located and use search paths relative to that; for example, if C:\myprefix\bin\app.exe is running, GLib might put C:\myprefix\share into the result of g_get_system_data_dirs(), so that the application can ask to load app/data.xml from the data directories and get C:\myprefix\share\app\data.xml. We would like to be able to do the same on Linux, for example so that the apps in a Flatpak or Snap package can be constructed from RPM or dpkg packages without needing to be recompiled for a different --prefix, and so that other third-party software packages like the games on Steam and gog.com can easily locate their own resources.

Relatedly, there are currently no well-defined semantics for what happens when a .desktop file or a D-Bus .service file has Exec=./bin/foo. The meaning of Exec=foo is well-defined (it searches $PATH) and the meaning of Exec=/opt/whatever/bin/foo is obvious. When this came up in D-Bus previously, my assertion was that the meaning should be the same as in .desktop files, whatever that is.

We agreed to propose that the meaning of a non-absolute path in a .desktop or .service file should be interpreted relative to the directory where the .desktop or .service file was found: for example, if /opt/whatever/share/applications/foo.desktop says Exec=../../bin/foo, then /opt/whatever/bin/foo would be the right thing to execute. While preparing a mail to the freedesktop and D-Bus mailing lists proposing this, I found that I had proposed the same thing almost 2 years ago... this time I hope I can actually make it happen!
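
To make that concrete, here is a hypothetical /opt/whatever/share/applications/foo.desktop under the proposed semantics; the relative Exec path resolves against the directory containing the file, so it runs /opt/whatever/bin/foo:

# Hypothetical .desktop file installed under /opt/whatever
[Desktop Entry]
Type=Application
Name=Foo
Exec=../../bin/foo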

Flatpak and OSTree bug fixing

On the way to the hackfest, and while the discussion moved to topics that I didn't have useful input on, I spent some time fixing up the Debian packaging for Flatpak and its dependencies. In particular, I did my first upload as a co-maintainer of bubblewrap, uploaded ostree to unstable (with the known limitation that the grub, dracut and systemd integration is missing for now since I haven't been able to test it yet), got most of the way through packaging Flatpak 0.6.5 (which I'll upload soon), cherry-picked the right patches to make ostree compile on Debian 8 in an effort to make backports trivial, and spent some time disentangling a flatpak test failure which was breaking the Debian package's installed-tests. I'm still looking into ostree test failures on little-endian MIPS, which I was able to reproduce on a Debian porterbox just before the end of the hackfest.

OSTree + Debian

I also had some useful conversations with developers from Endless, who recently opened up a version of their OSTree build scripts for public access. Hopefully that information brings me a bit closer to being able to publish a walkthrough for how to deploy a simple Debian derivative using OSTree (help with that is very welcome of course!).

GTK life-cycle and versioning

The life-cycle of GTK releases has already been mentioned here and elsewhere, and there are some interesting responses in the comments on my earlier blog post.

It's important to note that what we discussed at the hackfest is only a proposal: a hackfest discussion between a subset of the GTK maintainers and a small number of other GTK users (I am in the latter category) doesn't, and shouldn't, set policy for all of GTK or for all of GNOME. I believe the intention is that the GTK maintainers will discuss the proposals further at GUADEC, and make a decision after that.

As I said before, I hope that being more realistic about API and ABI guarantees can avoid GTK going too far towards either of the possible extremes: either becoming unable to advance because it's too constrained by compatibility, or breaking applications because it isn't constrained enough. The current situation, where it is meant to be compatible within the GTK 3 branch but in practice applications still sometimes break, doesn't seem ideal for anyone, and I hope we can do better in future.

Acknowledgements

Thanks to everyone involved, particularly:

  • Matthias Clasen, who organised the hackfest and took a lot of notes
  • Allison Lortie, who provided on-site cat-herding and led us to some excellent restaurants
  • Red Hat Inc., who provided the venue (a conference room in their Toronto office), snacks, a lot of coffee, and several participants
  • my employers Collabora Ltd., who sponsored my travel and accommodation

June 20, 2016 06:37 PM

June 15, 2016

Andrew Shadura

Migrate to systemd without a reboot

Yesterday I was fixing an issue with one of the servers behind kallithea-scm.org: the hook intended to propagate pushes from Our Own Kallithea to Bitbucket stopped working. Until yesterday, that server was using Debian's flavour of System V init and djb's dæmontools to keep things running. To make the hook asynchronous, I wrote a service to be managed by dæmontools, so that concurrency issues would be solved by it. However, I didn't implement any timeouts, so when wget froze last week while pulling Weblate's hook, there was nothing to interrupt it, and the hook stopped working, since dæmontools thought it was already running and wouldn't re-trigger it. Killing wget helped, but I decided I needed to do something to prevent the situation from happening again in the future.

I've been using systemd at work for the last year, so I am now confident I'm happier with systemd than with dæmontools, and I decided to switch the server to systemd. Not surprisingly, I prepared the unit files in about 5 minutes without having to look into the manuals again, while with dæmontools I had to check things every time I needed to change something. The tricky part was the switch itself. It is a virtual server, presumably running in Xen, and I don't have access to the console, so if I bork (break) something, I need to summon Bradley Kuhn or someone from Conservancy, which kindly donated the server to the project. In any case, I decided to attempt the upgrade without a reboot, so that I would have more options to roll back my changes in case things went wrong.
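
For illustration, a minimal sketch of the kind of unit file involved; the description, path and timeout value are all made up for this example, but TimeoutStartSec is exactly the guard the dæmontools setup was missing:

[Unit]
Description=Propagate pushes from Kallithea to Bitbucket

[Service]
Type=oneshot
# Hypothetical hook script; a hung network fetch is now killed after 2 minutes.
ExecStart=/usr/local/bin/propagate-push
TimeoutStartSec=120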

After studying the manpages of both systemd's init and sysvinit's init, I realised I could install systemd as /sbin/init and ask the already running System V init to re-exec. However, systemd's init can't talk to System V init, so before installing systemd I made a backup of it. It's also important to stop all running services (except probably ssh) to make sure systemd doesn't start second instances of each. And then: /tmp/init u — and we're running systemd! A couple of additional checks, and it's safe to reboot.
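
Condensed into commands, the sequence described above looks roughly like this; a sketch, assuming Debian's systemd-sysv package is what puts systemd at /sbin/init, and definitely not something to run without a fallback plan:

cp /sbin/init /tmp/init                    # keep the sysvinit binary around
sudo apt-get install systemd systemd-sysv  # systemd takes over /sbin/init
# stop all running services (except ssh), then:
sudo /tmp/init u                           # tell PID 1 (still sysvinit) to re-exec /sbin/init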

Only when I had done all that did I realise that, had systemd not worked, I'd probably not have been able to undo my changes if my connection had been interrupted. So, even though it worked in the end, it's probably not a good idea to perform such manipulations when you don't have an alternative way to connect to the server :)

June 15, 2016 11:51 AM

June 14, 2016

Simon McVittie

GTK versioning and distributions

Allison Lortie has provoked a lot of comment with her blog post on a new proposal for how GTK is versioned. Here's some more context from the discussion at the GTK hackfest that prompted that proposal: there's actually quite a close analogy in how new Debian versions are developed.

The problem we're trying to address here is the two sides of a trade-off:

  • Without new development, a library (or an OS) can't go anywhere new
  • New development sometimes breaks existing applications

Historically, GTK has aimed to keep compatible within a major version, where major versions are rather far apart (GTK 1 in 1998, GTK 2 in 2002, GTK 3 in 2011, GTK 4 somewhere in the future). Meanwhile, fixing bugs, improving performance and introducing new features sometimes results in major changes behind the scenes. In an ideal world, these behind-the-scenes changes would never break applications; however, the world isn't ideal. (The Debian analogy here is that as much as we aspire to having the upgrade from one stable release to the next not break anything at all, I don't think we've ever achieved that in practice - we still ask users to read the release notes, even though ideally that wouldn't be necessary.)

In particular, the perceived cost of doing a proper ABI break (a fully parallel-installable GTK 4) means there's a strong temptation to make changes that don't actually remove or change C symbols, but are clearly an ABI break, in the sense that an application that previously worked and was considered correct no longer works. A prominent recent example is the theming changes in GTK 3.20: the ABI in terms of functions available didn't change, but what happens when you call those functions changed in an incompatible way. This makes GTK hard to rely on for applications outside the GNOME release cycle, which is a problem that needs to be fixed (without stopping development from continuing).

The goal of the plan we discussed today is to decouple the latest branch of development, which moves fast and sometimes breaks API, from the API-stable branches, which only get bug fixes. This model should look quite familiar to Debian contributors, because it's a lot like the way we release Debian and Ubuntu.

In Debian, at any given time we have a development branch (testing/unstable) - currently "stretch", the future Debian 9. We also have some stable branches, of which the most recent are Debian 8 "jessie" and Debian 7 "wheezy". Different users of Debian have different trade-offs that lead them to choose one or the other of these. Users who value stability and want to avoid unexpected changes, even at a cost in terms of features and fixes for non-critical bugs, choose to use a stable release, preferably the most recent; they only need to change what they run on top of Debian for OS API changes (for instance webapps, local scripts, or the way they interact with the GUI) approximately every 2 years, or perhaps less often than that with the Debian-LTS project supporting non-current stable releases. Meanwhile, users who value the latest versions and are willing to work with a "moving target" as a result choose to use testing/unstable.

The GTK analogy here is really quite close. In the new versioning model, library users who value stability over new things would prefer to use a stable-branch, ideally the latest; library users who want the latest features, the latest bug-fixes and the latest new bugs would use the branch that's the current focus of development. In practice we expect that the latter would be mostly GNOME projects. There's been some discussion at the hackfest about how often we'd have a new stable-branch: the fastest rate that's been considered is a stable-branch every 2 years, similar to Ubuntu LTS and Debian, but there's no consensus yet on whether they will be that frequent in practice.

How many stable versions of GTK would end up shipped in Debian depends on how rapidly projects move from "old-stable" to "new-stable" upstream, how much those projects' Debian maintainers are willing to patch them to move between branches, and how many versions the release team will tolerate. Once we reach a steady state, I'd hope that we might have 1 or 2 stable-branched versions active at a time, packaged as separate parallel-installable source packages (a lot like how we handle Qt). GTK 2 might well stay around as an additional active version just from historical inertia. The stable versions are intended to be fully parallel-installable, just like the situation with GTK 1.2, GTK 2 and GTK 3 or with the major versions of Qt.

For the "current development" version, I'd anticipate that we'd probably only ship one source package, and do ABI transitions for one version active at a time, a lot like how we deal with libgnome-desktop and the evolution-data-server family of libraries. Those versions would have parallel-installable runtime libraries but non-parallel-installable development files, again similar to libgnome-desktop.

At the risk of stretching the Debian/Ubuntu analogy too far, the intermediate "current development" GTK releases that would accompany a GNOME release are like Ubuntu's non-LTS suites: they're more up to date than the fully stable releases (Ubuntu LTS, which has a release schedule similar to Debian stable), but less stable and not supported for as long.

Hopefully this plan can meet both of its goals: minimize breakage for applications, while not holding back the development of new APIs.

June 14, 2016 01:56 AM

June 06, 2016

Helen Koike

[How to] Speed up compilation time with Icecc

With Icecc (Icecream) you can use other machines in your local network to compile for you.

If you have a single machine, usually you would do (for a quad-core machine) something like:

make -j4

This command will generate four compilation jobs and distribute them across your CPU cores, compiling the jobs in parallel.

But if you have another machine in your local network, Icecc lets you use the cores of that machine too. If the other machine is dual-core, you could run:

make -j6

How does it work?


When you call make -jN, instead of calling the classic GNU gcc, we "trick" make into calling another gcc binary provided by Icecc (by changing the PATH).

The make command generates the jobs and calls the Icecc gcc, which sends the source files to the scheduler; the scheduler then forwards the jobs to the remote machines (or back to itself, or to the machine that started the compilation).



How to set up the network?


Easy on Ubuntu:

* Run the following commands on every computer in the network:

$ sudo apt-get install icecc

$ export PATH=/usr/lib/icecc/bin:$PATH

Check that the gcc in /usr/lib/icecc is being used:

$ which gcc
/usr/lib/icecc/bin/gcc

Let's say the IP address of the machine you chose to be the scheduler is 192.168.0.34. Edit the file /etc/icecc/icecc.conf and change the following variables (again, on all the machines in the network):

ICECC_NETNAME="icecc_net"
ICECC_ALLOW_REMOTE="yes"
ICECC_SCHEDULER_HOST="192.168.0.34"

Restart the Icecc daemon:

sudo service iceccd restart

* Run the following command on the scheduler machine 192.168.0.34:

sudo service icecc-scheduler start

How can I know if it works?


Install and Run the monitor:

$ sudo apt-get install icecc icecc-monitor

$ icemon -n icecc_net

You should see all machines and an indicator saying that the network is online:


In this case I have 3 machines, the first two have four cores and the last one just one core.

When I compile something with make -j9 I see the Jobs number growing and the slots being filled.

Done!!!

CCache with Icecc (edited):

To speed up your compilation time even more, you can set up CCache (explained in the previous post).

The general idea is: first check in a local cache (using CCache) whether the source files have already been compiled; if not, hand the job to Icecc.

When using CCache with Icecc, you don't need to add Icecc to the PATH; we use CCACHE_PREFIX instead:

$ export CCACHE_PREFIX=icecc

$ echo $PATH
/usr/lib/ccache:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games

$ which gcc
/usr/lib/ccache/gcc

by Helen Fornazier (noreply@blogger.com) at June 06, 2016 01:13 AM

[How to] Speed up compilation time with CCache

CCache stores the compiled version of your source files in a cache. If you try to compile the same source files again, you will get a cache hit and retrieve the compiled objects from the cache instead of compiling them again.

How does it work?


Instead of calling the GNU gcc compiler directly, you point your PATH to another gcc binary provided by CCache, which checks the cache first and then calls the GNU gcc if necessary:



How to install it?


Easy on Ubuntu:

$ sudo apt-get install ccache

Then change your PATH to point to the CCache gcc version (not the plain gcc):

$ export PATH="/usr/lib/ccache:$PATH"

Check with:

$ which gcc
/usr/lib/ccache/gcc

Done!!!

How can I know if it works? 


You can re-compile something and check whether CCache is working with the ccache -s command; you should see some cache hits:

You can increase/decrease your cache size with:

$ ccache --max-size=5G


Troubleshooting: Recompiling the Linux kernel never hits the cache?

Check whether the flag CONFIG_LOCALVERSION_AUTO is set in your menuconfig; disable it and try again.
This flag appends the git version to the kernel version string, which changes a core header file and forces CCache to recompile almost everything.
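
If you'd rather not go through menuconfig, the same change can be made non-interactively with the kernel's own scripts/config helper; a quick sketch, run from the top of the kernel tree:

scripts/config --disable LOCALVERSION_AUTO
make olddefconfig   # fold the change back into .config
make -j4            # rebuild
ccache -s           # the hit counters should now be growing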


CCache with Icecc (edited):

If you want to use CCache together with Icecc (which I'll explain in another post) to speed up your compilation time even more, use CCACHE_PREFIX=icecc (thanks to Joel Rosdahl, who pointed this out in the comments):

$ export CCACHE_PREFIX=icecc

NOTE: You don't need to add /usr/lib/icecc/bin to your PATH

by Helen Fornazier (noreply@blogger.com) at June 06, 2016 01:07 AM

June 05, 2016

Simon McVittie

Flatpak in Debian

Quite a lot has happened in xdg-app since last time I blogged about it. Most noticeably, it isn't called xdg-app any more, having been renamed to Flatpak. It is now available in Debian experimental under that name, and the xdg-app package that was briefly there has been removed. I'm currently in the process of updating Flatpak to the latest version 0.6.4.

The privileged part has also spun off into a separate project, Bubblewrap, which recently had its first release (0.1.0). This is intended as a common component with which unprivileged users can start a container in a way that won't let them escalate privileges, like a more flexible version of linux-user-chroot.

Bubblewrap has also been made available in Debian, maintained by Laszlo Boszormenyi (also maintainer of linux-user-chroot). Yesterday I sent a patch to update Laszlo's packaging for 0.1.0. I'm hoping to become a co-maintainer to upload that myself, since I suspect Flatpak and Bubblewrap might need to track each other quite closely. For the moment, Flatpak still uses its own internal copy of Bubblewrap, but I consider that to be a bug and I'd like to be able to fix it soon.

At some point I also want to experiment with using Bubblewrap to sandbox some of the game engines that are packaged in Debian: networked games are a large attack surface, and typically consist of the sort of speed-optimized C or C++ code that is an ideal home for security vulnerabilities. I've already made some progress on jailing game engines with AppArmor, but making sensitive files completely invisible to the game engine seems even better than preventing them from being opened.

Next weekend I'm going to be heading to Toronto for the GTK Hackfest, primarily to talk to GNOME and Flatpak developers about their plans for sandboxing, portals and Flatpak. Hopefully we can make some good progress there: the more I know about the state of software security, the less happy I am with random applications all being equally privileged. Newer display technologies like Wayland and Mir represent an opportunity to plug one of the largest holes in typical application containerization, which is a major step in bringing sandboxes like Flatpak and Snap from proof-of-concept to a practical improvement in security.

Other next steps for Flatpak in Debian:

  • To get into the next stable release (Debian 9), Flatpak needs to move from experimental into unstable and testing. I've taken the first step towards that by uploading libgsystem to unstable. Before Flatpak can follow, OSTree also needs to move.
  • Now that it's in Debian, please report bugs in the usual Debian way or send patches to fix bugs: Flatpak, OSTree, libgsystem.
  • In particular, there are some OSTree bugs tagged help. I'd appreciate contributions to the OSTree packaging from people who are interested in using it to deploy dpkg-based operating systems - I'm primarily looking at it from the Flatpak perspective, so the boot/OS side of it isn't so well tested. Red Hat have rpm-ostree, and I believe Endless do something analogous to build OS images with dpkg, but I haven't had a chance to look into that in detail yet.
  • Co-maintainers for Flatpak, OSTree, libgsystem would also be very welcome.

June 05, 2016 11:24 AM

June 03, 2016

memcpy.io - Robert Foss

Running Weston on Raspbian


Progress in the VC4 graphics camp and the Wayland camp now enables us to run Weston on top of the DRM backend on VC4 platforms. Previously, software rendering using pixman was needed, but this is no longer the case.

Additionally, the rpi backend for Weston is being removed, since it has been obsoleted by the improved DRM layer.

Let's explore running hardware accelerated Weston on the Raspberry Pi.

Building Linux kernel

A comprehensive guide for building a recent Linux kernel for Raspberry Pi boards has been written by the Raspberry Pi foundation and is available here.

As of this writing the guide helps you build a v4.4 kernel which is good enough for our purposes.

Set up alternative install location

These build instructions are based on the Wayland instructions from freedesktop.org, but altered to target VC4 and Raspbian.

You probably don't want to install experimental builds of software among the usual software of your operating system, so let's define a prefix for where to install our builds.

# Change WLD to any location you like
export WLD=~/local
export LD_LIBRARY_PATH=$WLD/lib
export PKG_CONFIG_PATH=$WLD/lib/pkgconfig/:$WLD/share/pkgconfig/
export PATH=$WLD/bin:$PATH
export ACLOCAL_PATH=$WLD/share/aclocal
export ACLOCAL="aclocal -I $ACLOCAL_PATH"

# Needed by autotools
mkdir -p $WLD/share/aclocal

Installing dependencies

Start by installing the build dependencies of mesa, weston and wayland.

# Enable source packages
sudo sed -e "s/#\sdeb-src/deb-src/g" -i /etc/apt/sources.list
sudo apt update

The above step can alternatively be completed using the GUI of your package manager, by enabling source packages.

# Install build dependencies of mesa
sudo apt-get build-dep mesa

# Install build dependencies of wayland/weston
# Install build dependencies of wayland/weston (the -dev packages pull in
# the runtime libraries as well)
sudo apt-get install \
  libevdev-dev \
  libwacom-dev \
  libxkbcommon-dev

Building Mesa

Configure and compile mesa with vc4, wayland and EGL support.

git clone git://anongit.freedesktop.org/mesa/mesa
cd mesa
./autogen.sh --prefix=$WLD \
  --enable-gles2 \
  --with-egl-platforms=x11,wayland,drm \
  --enable-gbm --enable-shared-glapi \
  --with-gallium-drivers=vc4 \
  --without-dri-drivers \
  --disable-va \
  --disable-vdpau \
  --disable-xvmc \
  --disable-omx
make -j4 && make install

Building Weston and dependencies

Weston and Wayland have a number of dependencies that also need to be fetched and built.

Wayland

Weston is a Wayland compositor, so we're going to have to build Wayland.

git clone git://anongit.freedesktop.org/wayland/wayland
cd wayland
./autogen.sh --prefix=$WLD
make -j4 && make install
cd ..

git clone git://anongit.freedesktop.org/wayland/wayland-protocols
cd wayland-protocols
./autogen.sh --prefix=$WLD
make install
cd ..

libinput

libinput is a dependency of Weston; it handles input devices like keyboards, touchpads and mice.

git clone git://anongit.freedesktop.org/wayland/libinput
cd libinput
./autogen.sh --prefix=$WLD
make -j4 && make install
cd ..

Weston

Finally we've built all of the dependencies of Weston and can now build it.

git clone git://anongit.freedesktop.org/wayland/weston
cd weston
./autogen.sh --prefix=$WLD \
  --disable-libunwind
make -j4 && make install
cd ..

Running Weston

That wasn't so bad. It took a little while, but now we're ready to start Weston. Let's fire up a (virtual) terminal; make sure that you're not running in an X terminal, an ssh session or a serial terminal.

Running weston in this way depends on logind.

# Make sure that $DISPLAY is unset.
unset DISPLAY

# And that $XDG_RUNTIME_DIR has been set and created.
if test -z "${XDG_RUNTIME_DIR}"; then
  export XDG_RUNTIME_DIR=/tmp/${UID}-runtime-dir
  if ! test -d "${XDG_RUNTIME_DIR}"; then
    mkdir "${XDG_RUNTIME_DIR}"
    chmod 0700 "${XDG_RUNTIME_DIR}"
  fi
fi

# Run weston:
weston

Try weston applications

Now that we're running weston, let's try some applications. They're located in the top level directory of weston.

  • weston-terminal
  • weston-flower
  • weston-gears
  • weston-smoke
  • weston-image
  • weston-view
  • weston-resizor
  • weston-eventdemo

When you've started all of your favorite applications you can grab a screenshot by pressing Super + s, which will save wayland-screenshot.png in your home directory.

by Robert Foss at June 03, 2016 08:32 AM

May 25, 2016

Olivier Crête

GStreamer Spring Hackfest 2016

After missing the last few GStreamer hackfests, I finally managed to attend this time. It was held in Thessaloniki, Greece's second largest city. The city is located by the seaside, and the entire hackfest and related activities were either directly by the sea or just a couple of blocks away.

Collabora was very well represented, with Nicolas, Mathieu and Lubosz also attending.

Nicolas concentrated his efforts on making kmssink and v4l2dec work together to provide zero-copy decoding and display on an Exynos 4 board, without a compositor or other form of display manager. Expect a blog post soon explaining how to make this all fit together.

Lubosz showed off his VR kit. He implemented a viewer for planar point clouds acquired from a Kinect. He’s working on a set of GStreamer plugins to play back spherical videos. He’s also promised to blog about all this soon!

Mathieu started the hackfest by investigating the intricacies of Albanian customs, then arrived on the second day in Thessaloniki and hacked on hotdoc, his new fancy documentation generation tool. He’ll also be posting a blog about it, however in the meantime you can read more about it here.

As for myself, I took the opportunity to fix a couple GStreamer bugs that really annoyed me. First, I looked into bug #766422: why glvideomixer and compositor didn’t work with RTSP sources. Then I tried to add a ->set_caps() virtual function to GstAggregator, but it turns out I first needed to delay all serialized events to the output thread to get predictable outcomes and that was trickier than expected. Finally, I got distracted by a bee and decided to start porting the contents of docs.gstreamer.com to Markdown and updating it to the GStreamer 1.0 API so we can finally retire the old GStreamer.com website.

I'd also like to thank Sebastian and Vivia for organising the hackfest and for making us all feel welcome!

[Photo: the GStreamer hackfest venue.]

by ocrete at May 25, 2016 08:43 PM

May 17, 2016

Gustavo Padovan

Collabora contributions to Linux Kernel 4.6

Linux Kernel 4.6 was released this week, and a total of 9 Collabora engineers took part in its development, Collabora’s highest number of engineers contributing to a single Linux Kernel release yet. In total Collabora contributed 42 patches.

As part of Collabora’s continued commitment to further increase its participation to the Linux Kernel, Collabora is actively looking to expand its team of core software engineers. If you’d like to learn more, follow this link.

Here are some highlights of Collabora’s participation in Kernel 4.6:

Andrew Shadura fixed the number of buttons reported on the PenMount 6000 USB touchscreen controller, while Daniel Stone enabled BCM283x family devices in the ARM multi_v7_defconfig and Emilio López added module autoloading for a few sunxi devices.

Enric Balletbo i Serra added boot console output to AM335X (Sitara) and OMAP3-IGEP, and fixed the audio codec setup on AM335X to use the right external clock. Martyn Welch added the USB device ID for the GE Healthcare cp210x serial device and renamed the reset reason of the Zodiac Watchdog.

Gustavo Padovan cleaned up the Android Sync Framework on the staging tree for further de-staging of the Sync File infrastructure, which will land in 4.7. Most of the work was removing interfaces that won’t be used in mainline. He also added vblank event support for atomic commits in the virtio DRM driver.

Peter Senna improved an error path and added some style fixes to the sisusbvga driver, while Sjoerd Simons enabled wireless on Radxa Rock2 boards, fixed an issue with the brcmfmac sdio driver sometimes timing out with a false positive, and fixed some issues with serial output on the Renesas R-Car Porter board.

Tomeu Vizoso changed driver_match_device() to propagate errors and, in the case of -EPROBE_DEFER, queue the device for deferred probing. He also provided two fixes to the Rockchip DRM driver as part of his work on making intel-gpu-tools work on other platforms.

Following is a list of all patches submitted by Collabora for this kernel release:

Andrew Shadura (1):

Daniel Stone (1):

Emilio López (4):

Enric Balletbo i Serra (3):

Gustavo Padovan (17):

Martyn Welch (2):

Peter Senna Tschudin (4):

Sjoerd Simons (6):

Tomeu Vizoso (4):

by Gustavo Padovan at May 17, 2016 03:39 PM

April 21, 2016

Tomeu Vizoso

Validating changes to KMS drivers with IGT

New DRM drivers are being added with almost every new kernel release, and because the mode setting API is so rich and complex, bugs do slip in that translate into differences in behaviour between drivers.

There have been previous attempts at writing test suites for validating changes and preventing regressions, but they have typically happened downstream, focused on the specific needs of specific products, and been limited to one or at most a few different hardware platforms.

Writing these tests from scratch would have been an enormous amount of work, and gathering the previous efforts and joining them wouldn't have been worth much, because they were written using different test frameworks and in different programming languages. Also, there would be great overlap on the basic tests, and little would remain of the trickier stuff.

Of the existing test suites, the one with the most coverage is intel-gpu-tools, used by the Intel graphics team. Though a big part of it is specific to the i915 driver, the parts that use the generic APIs are pretty much driver-independent and can be made to work with the other drivers without much effort. Also, Broadcom's Eric Anholt has already started adding tests for IOCTLs specific to the VideoCore-IV driver.

Collabora's Micah Fedke and Daniel Stone had added a facility for selecting DRM device files other than i915's, and I improved the abstraction for creating buffers so it also works for drivers without GEM buffers. Next I removed a bunch of superfluous dependencies on i915-only stuff and got a useful subset of tests to run on a Radxa Rock2 board (with the Rockchip 3288 SoC). Around half of these patches have been merged already and the other half are awaiting review. Meanwhile, Collabora's Robert Foss is running the ported tests on a Raspberry Pi 2 and has started sending patches to account for its peculiarities.
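As a rough sketch, running one of the ported KMS tests on such a board looks something like the following; this assumes a local IGT build, and the subtest name is only an example, since the available subtests vary between IGT versions:

# List the available subtests, then run one against whatever KMS
# device the board exposes (example subtest name, not guaranteed).
./tests/kms_flip --list-subtests
sudo ./tests/kms_flip --run-subtest basic-plain-flip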

The next two big chunks of work are abstracting the CRC checksumming of frames (on drivers other than i915 this could be done with Google's Chamelium or with a board similar to the Numato Opsis), and abstracting the buffer management API from libdrm that is currently i915-only (bufmgr). Something that will have to be dealt with in the future is abstracting the submission of specific loads on the GPU, as that's currently very much driver-specific.

Additionally, I will be scheduling jobs in our LAVA instance to run these tests on the boards we have in there.

Thanks to Google for sponsoring my time, to the Intel OTC folks for their support and reviews, and to Collabora for sponsoring Robert's, Micah's and Daniel's time.

by Tomeu Vizoso (noreply@blogger.com) at April 21, 2016 01:02 PM