Open Source at 20

Open source software has been around for a long time. But calling it open source only began in 1998. Here's some history:

Christine Peterson came up with the term "open source software" in 1997 and (as she reports at that link) a collection of like-minded geeks decided on February 3, 1998 to get behind it in a big way. Eric S. Raymond became the lead evangelist when he published Goodbye, "free software"; hello, "open source" on February 8th. Bruce Perens led the creation of the Open Source Initiative later that month. Here at Linux Journal, we were all over it from the start as well. (Here's one example.)

"Open source" took off so rapidly that O'Reilly started OSCON the next year, making this year's OSCON, happening now, the 19th one. (FWIW, at the 2005 OSCON, O'Reilly and Google together gave me an award for "Best Communicator" on the topic. I was at least among the most enthusiastic.)

Google's Ngram Viewer, which searches through all scanned books from 1800 to 2008, shows (see above) that use of "open source" hockey-sticked quickly. Today on Google, "open source" gets 116 million results.

But interest has been trailing off, as we see from Google Trends, which follows "interest over time." Here's how that looks since 2004:

IBM's New Security-First Nabla Container, Humble Bundle's "Linux Geek Bundle", Updates on the Upcoming Atari VCS Console, Redesigned Files App for Chromebooks and Catfish 1.4.6 Released

News briefs for July 17, 2018.

IBM has a new container called Nabla designed for security first, ZDNet reports. IBM claims it's "more secure than Docker or other containers by cutting operating system calls to the bare minimum and thereby reducing its attack surface as small as possible". See also this article for more information on Nabla and this article on how to get started running the containers.

Humble Bundle is offering a "Linux Geek Bundle" of ebooks from No Starch Press for $1 (or more—your choice) right now, in connection with It's FOSS. The Linux Geek bundle's books are worth $571 and are available in PDF, ePUB and MOBI format, and are DRM-free. Part of the purchase price will be donated to the EFF. See the It's FOSS post for the list of titles and more info.

More information on the upcoming Atari VCS console due to launch next year has been released in a Q&A on Medium with Rob Wyatt, System Architect for the Atari VCS project. Rob provides more details on the hardware specs: "The VCS hardware will be powered by an AMD Bristol Ridge family APU with Radeon R7 graphics and is now going to get 8 gigabytes of unified memory. This is a huge upgrade from what was originally specified and unlike other consoles it's all available, we won't reserve 25% of hardware resources for system use." In addition, the Q&A covers the Atari VCS "open platform" and "Sandbox", compatible controllers and more.

Google's Chrome OS team is working on redesigning its Files app for Chromebooks "with a new 'My Files' section that promises to help you better organize your local files, including those from any Android and Linux apps you might have installed." See the Softpedia News post for more information on this redesigned app for Android and Linux files and how to test it via the Chrome OS Canary experimental channel.

Catfish 1.4.6 has been released, and it has now officially joined the Xfce family. According to the announcement, it's "lightweight, fast, and a perfect companion to the Thunar file manager. With the transition from Launchpad to Xfce, things have moved around a bit. Update your bookmarks accordingly!" Other new features include an improved thumbnailer, translation updates and several bug fixes. New releases of Catfish now can be found at the Xfce release archive.

A Look at Google's Project Fi

Google's Project Fi is a great cell-phone service, but the data-only SIMs make it incredible for network projects!

I have a lot of cell phones. I have iPhones (old and new), Android phones (old, new, very old and funny-shaped), and I have a few legacy phones that aren't either Android or iPhone. Remember Maemo? Yeah, and I still have one of those old Nokia phones somewhere too. Admittedly, part of the reason I have such a collection is that I tend to hoard nostalgic technology, but part of it is practical too.

I've used phones as IP cameras for BirdTopia (my recorded and streamed bird-feeder collection). I've created WiFi-only audiobook devices that I use when I'm out and about. I've used old phones as SONOS remotes, Plex players, Chromecast initiators and countless other tasks that tiny little computers are perfect for doing. One of the frustrating things about using old cell phones for projects like that though is they only have WiFi access, because adding multiple devices to a cell plan becomes expensive quickly. That's not the case anymore, however, thanks to Google's Project Fi.

Most people love Project Fi because of the tower-hopping features or because of the fair pricing. I like those features too, but the real bonus for me is the "data only" SIM option. Like most people, I rarely make phone calls anymore, and with so many chat apps, texting isn't very important either. With most cell-phone plans, there's an "access" fee per line. With Project Fi, additional devices don't cost anything more! (But, more about that later.) The Project Fi experience is worth investigating.

What's the Deal?

Project Fi is a play on the term "WiFi" and is pronounced "Project Fye", as opposed to "Project Fee", which is what I called it at first. Several features set Project Fi apart from other cell-phone plans.

First, Project Fi uses towers from three carriers: T-Mobile, US Cellular and Sprint. When using supported hardware, Project Fi constantly monitors signal strength and seamlessly transitions between the various towers. Depending on where you live, this can mean constant access to the fastest network or a better chance of having any coverage at all. (I'm in the latter group, as I live in a rural area.)

The second standout feature of Project Fi is the pricing model. Every phone pays a $20/month fee for unlimited calls and texts. On top of that, all phones and devices share a data pool that costs $10/GB. The data cost isn't remarkably low, but Google handles it very well. I recently discovered that data isn't billed in full $10 increments (Figure 1). If you use 1.001GB of data, you pay $10.01 for data, not $20.
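In rough terms, the pricing model works out like this (a sketch using the figures above; the function and its name are my own, not Google's actual billing code):

```python
def fi_monthly_bill(data_gb: float, lines: int = 1) -> float:
    """Estimate a Project Fi bill: a $20 base fee per phone line for
    unlimited calls and texts, plus data prorated at $10 per GB
    (not rounded up to the next full gigabyte)."""
    BASE_PER_PHONE = 20.0   # unlimited calls and texts, per line
    PER_GB = 10.0           # data is prorated, not billed in $10 steps
    return round(lines * BASE_PER_PHONE + data_gb * PER_GB, 2)

# One phone using just over a gigabyte pays for exactly what it used:
print(fi_monthly_bill(1.001))        # 20.00 base + 10.01 data = 30.01
print(fi_monthly_bill(2.5, lines=2)) # two phones sharing 2.5GB = 65.0
```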

Log Your Users' Actions with Auditd

Beyond the general trend toward logging and auditing everything, many regulations require tracing the actions performed by a system's users. The Auditd framework, available natively on most GNU/Linux distributions, meets these requirements by monitoring a system's activities. It can generate event logs that record information about the various activities making up a system's life, from file accesses to the processes run by administrators.
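As an illustration of the kind of rules involved (a sketch using standard auditctl rule syntax; the `-k` key names are my own), an /etc/audit/rules.d/ fragment might look like:

```
# Log writes and attribute changes to /etc/passwd
-w /etc/passwd -p wa -k identity

# Log every execve() performed with root privileges (64-bit ABI)
-a always,exit -F arch=b64 -S execve -F euid=0 -k root-cmds
```

Matching events can then be retrieved later with, for example, ausearch -k identity.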

Article contents

1 A bit of history

2 Architecture

2.1 How it works

2.2 Components

3 Installation

3.1 Configuration

3.2 Pluggable Authentication Module

4 Rules

4.1 Types

4.2 Editing

4.3 Implementation

4.3.1 Control rules

4.3.2 Filesystem access rules

4.3.3 System call rules

4.3.4 Labels

4.4 Logs

4.4.1 Management

4.4.2 Properties

4.4.3 Analysis

4.5 Impact on the system

4.5.1 Evaluation

4.5.2 Optimizations: organization, filters, exclusions

5 Available tools

5.1 Managing rules with auditctl

5.2 Searching with ausearch

5.3 Generating reports with aureport

5.4 Debugging with autrace

6 A keylogger with pam_tty_audit

6.1 Configuration

6.2 Logs

6.2.1 Structure

6.2.2 Analysis

7 Protection

7.1 Preventing log loss

7.1.1 Retention policy

7.1.2 Alerts

7.2 Protecting the audit policy

8 Centralizing audit logs

8.1 Forwarding with the remote plugin

8.1.1 Collector configuration

8.1.2 Client configuration

8.2 Forwarding with Syslog

9 Monitoring

9.1 Visualization with scripts

9.2 Integration into a SIEM

10 Limitations

10.1 POC

10.2 Countermeasures

11 Going further



Christian Perez

 > Read this article in full on our online reading platform, Connect  

Find this article (and many others) in GNU/Linux Magazine Hors-série n°93, available from the store and on Connect!

Debian "stretch" 9.5 Update Now Available, Red Hat Announces New Adopters of the GPL Cooperation Commitment, Linux Audio Conference 2018 Videos Now Available, Latte Dock v0.8 Released and More

News briefs for July 16, 2018.

Debian "stretch" has a new update, 9.5, the fifth update of the Debian 9 stable release. This version addresses several security issues and other problems. You can upgrade your current installation from one of Debian's HTTP mirrors.

Red Hat announced that 14 additional companies have adopted the GPL Cooperation Commitment, which means that "more than 39 percent of corporate contributions to the Linux kernel, including six of the top 10 contributors" are now represented. According to the Red Hat press release, these commitments "reflect the belief that responsible compliance in open source licensing is important and that license enforcement in the open source ecosystem operates by different norms." Companies joining the growing movement include Amazon, Arm, Canonical, GitLab, Intel Corporation, Liferay, Linaro, MariaDB, NEC, Pivotal, Royal Philips, SAS, Toyota and VMware.

The Linux Audio Conference announced that all videos from the 2018 conference in Berlin are now available. You can find the links here.

Latte Dock v0.8 is now available. New features include support for multiple layouts simultaneously, a smart dynamic background, unified global shortcuts for applets and tasks, and much more. Latte v0.8 is compatible with Plasma >= 5.12, KDE Frameworks >= 5.38 and Qt >= 5.9. You can download it from here.

Ubuntu has improved the user interface of its Snap Store website. It's FOSS reports that the updates make "it more useful for the users by adding developer verification, categories, improved search".

Opinion: GitHub vs GitLab

gitlab logo

Free software deserves free tools, not Microsoft-owned GitHub.

So, Microsoft bought GitHub, and many people are confused or worried. It's not a new phenomenon when any large company buys any smaller company, and people are right to be worried, although I argue that their timing is wrong. Like Microsoft, GitHub has made some useful contributions to free and open-source software, but let's not forget that GitHub's main product is proprietary software. And, it's not just some innocuous web service either; GitHub makes and sells a proprietary software package you can download and run on your own server called GitHub Enterprise (GHE).

Let's remember how we got here. BitMover made a tool called BitKeeper, a proprietary version control system that allowed free-of-charge licenses to free software projects. In 2002, the Linux kernel switched to using BitKeeper for its version control, although some notable developers made the noble choice to refuse to use the proprietary program. Many others did not, and for a number of years, kernel development was hampered by BitKeeper's restrictive noncommercial licenses.

In 2005, Andrew Tridgell, working at OSDL, developed a client that bypassed this restriction, and as a result, BitMover removed licenses to BitKeeper from all OSDL employees—including Linus Torvalds. Eventually, all non-commercial licenses were stopped, and new licenses included clauses preventing the development of alternative version control systems. As a result of this, two new projects were born: Mercurial and Git. Created in a few short weeks in 2005, Git quickly became the version control system for Linux development.

Proprietary version control tools aren't common in free software development, but proprietary collaboration websites have been around for some time. One of the earliest collaboration websites still around today is Sourceforge. Sourceforge was created in the late 1990s by VA Software, and the code behind the project was released in 2000.

This situation quickly changed: the project was shuttered and became Sourceforge Enterprise Edition, a proprietary software package. The code that ran Sourceforge was forked into GNU Savannah (later Savane) and GForge, and it's still in use today by both the GNU Project and CERN. When I last wrote about this problem, almost exactly ten years ago, Canonical's ambitious Launchpad service was still proprietary, something later remedied in 2009. Gitorious was created in 2010 and was for a number of years the Git hosting platform for the discerning free software developer, as the code for Gitorious was fully public and licensed under favorable terms for the new wave of AGPL-licensed projects that followed the FSF's Franklin Street Statement. Gitorious, sadly, is also no longer with us.

Allwinner VPU support in mainline Linux status update (week 28)

This week was the occasion to send out version 5 of the Sunxi-Cedrus VPU driver, which uses version 16 of the media requests API. The API contains the necessary internal plumbing for tying specific metadata (exposed as v4l2 controls, which are structures of data set by userspace) about the current video frame to decode with …

Python and Its Community Enter a New Phase

On Python's BDFL Guido van Rossum, his dedication to the Python community, PEP 572 and hope for a healthy outcome for the language, open source and the computing world in general.

Python is an amazing programming language, there's no doubt about it. From humble beginnings in 1991, it's now just about everywhere. Whether you're doing web development, system administration, test automation, devops or data science, odds are good that Python is playing a role in your work.

Even if you're not using Python directly, odds are good that it is being used behind the scenes. Using OpenStack? Python plays an integral role in its development and configuration. Using Dropbox on your computer? Then you've got a copy of Python running on your computer. Using Linux? When I purchased Red Hat Linux back in 1995, the configuration was a breeze—thanks to visual tools developed in Python.

And, of course, there are numerous schools and educational programs that are now teaching Python. MIT's intro computer science course switched several years ago from Scheme to Python, and thousands of universities all over the world made a similar switch in its wake. My 15-year-old daughter participates in a program for technology and entrepreneurship—and she's learning Python.

There currently is an almost insatiable demand for Python developers. Indeed, Stack Overflow reported last year that Python is not only the most popular language on its site, but it's also the fastest-growing language. I can attest to this popularity in my own job as a freelance Python trainer. Some of the largest computer companies in the world are now using Python on a regular basis, and their use of the language is growing, not shrinking.

Normally, a technology with this much impact would require a large and active marketing department. But Python is (of course) open-source software, and its success is the result of a large number of contributors—to the core language, to its documentation, to libraries and to the numerous blogs, tutorials, articles and videos available online. I often remind my students that people often think of "open source" as a synonym for "free of charge", but that they should instead think of it as a synonym for "powered by the community"—and there's no doubt that the Python community is strong.

Such a strong community doesn't come from nowhere. And there's no doubt that Guido van Rossum, who created Python and has led its development ever since, has been a supremely effective community organizer and leader.

FOSS Project Spotlight: Pydio Cells, an Enterprise-Focused File-Sharing Solution

Pydio Cells is a brand-new product focused on the needs of enterprises and large organizations, brought to you by the people who launched the concept of open-source file sharing and synchronization in 2008. The idea behind Pydio Cells is ambitious: to be to file sharing what Slack has been to chat—that is, a revolution in terms of the number of features, power and ease of use.

In order to reach this objective, Pydio's development team has switched from the old-school development stack (Apache and PHP) to Google's Go language to overcome the bottleneck represented by legacy technologies. Today, Pydio Cells offers a faster, more scalable microservice architecture that is in tune with dynamic modern enterprise environments.

In fact, Pydio's new "Cells" concept delivers file sharing as a modern collaborative app. Users are free to create flexible group spaces for sharing based on their own ways of working with dedicated in-app messaging for improved collaboration.

In addition, the enterprise data management functionality gives both companies and administrators reassurance, with controls and reporting that directly answer corporate requirements around the General Data Protection Regulation (GDPR) and other tightening data protection regulations.

Pydio Loves DevOps

In tune with modern enterprise DevOps environments, Pydio Cells now runs as its own application server (offering a dependency-free binary, with no need for external libraries or runtime environments). The application is available as a Docker image, and it offers out-of-the-box connectors for containerized application orchestrators, such as Kubernetes.

Also, the application has been broken up into a series of logical microservices. Within this new architecture, each service is allocated its own storage and persistence, and can be scaled independently. This enables you to manage and scale Pydio more efficiently, allocating resources to each specific service.

The move to Golang has delivered a ten-fold improvement in performance. At the same time, by breaking the application into logical microservices, larger users can scale the application by targeting greater resources only to the services that require it, rather than inefficiently scaling the entire solution.

Built on Standards

The new Pydio Cells architecture has been built with a renewed focus on the most popular modern open standards:

Chrome Browser Launching Mitigation for Spectre Attacks, The Linux Foundation Announces LF Energy Coalition, Kube 0.7.0 Now Available, New Android Apps for Nativ Vita Hi-Res Music Server and More

News briefs for July 13, 2018.

Google's Chrome browser is launching site isolation, "the most ambitious mitigation for Spectre attacks", Ars Technica reports. Site isolation "segregates code and data from each Internet domain into their own 'renderer processes', which are individual browser tasks that aren't allowed to interact with each other". This has been optional in Chrome for months, but starting with version 67, it will be enabled by default for 99% of users.

The Linux Foundation yesterday launched LF Energy, a new open-source coalition. According to the press release, LF Energy was formed "with support from RTE, Europe's biggest transmission power systems provider, and other organizations, to speed technological innovation and transform the energy mix across the world." See the press release for more information.

Version 0.7.0 of Kube, the "modern communication and collaboration client", is now available. Improvements include "a conversation view that allows you to read through conversations in chronological order"; "a conversation list that bundles all messages of a conversation (thread) together"; "automatic attachment of own public key"; "the account setup can be fully scripted through the sinksh commandline interface"; and more. See the release announcement for more info.

Nativ announced new iOS and Android apps for its Nativ Vita Hi-Res Music Server. The new apps, available from the Google Play Store, "give customers convenient control and playback functionality from their iOS or Android Smartphone or Tablet".

KDE released the third stability update for KDE Applications 18.04 yesterday. The release contains translation updates and bug fixes only, including improvements to Kontact, Ark, Cantor, Dolphin, Gwenview and KMag, among others. The full list of changes is available here.

NVIDIA announced its Jetson Xavier Developer Kit for the octa-core AI/robotics-focused Xavier module. According to Linux Gizmos, "the kit, which goes on sale for $1,300 in August, offers the first access to Xavier aside from the earlier Drive PX Pegasus autonomous car computer board, which incorporates up to 4x Xavier modules. The kit includes Xavier's Linux-based stack and Isaac SDK."

Mozilla announced the winners of 2018H1 Mozilla Research grants. Eight proposals were selected, "ranging from tools to fight online harassment to systems for generating speech. All these projects support Mozilla's mission to make the Internet safer, more empowering, and more accessible." See the Research Grants page for more info on the grants and how to apply.

Empowering Linux Developers for the New Wave of Innovation

snapcraft logo

New businesses with software at their core are being created every day. Developers are the lifeblood of so much of what is being built and of technological innovation, and they are ever more vital to operations across the entire business. So why wouldn't we empower them?

Machine learning and IoT in particular offer huge opportunities for developers, especially those facing the crowded markets of other platforms, to engage with a sizeable untapped audience.

That Linux is open source makes it an amazing breeding ground for innovation. Developers aren't constrained by closed ecosystems, which is why Linux has long been the operating system of choice for developers. By engaging with Linux, then, businesses can attract the best available developer skills.

The Linux ecosystem has always strived for a high degree of quality. Historically, the Linux community took sole responsibility for packaging software, gating each application update with careful review to ensure it worked as advertised on each Linux distribution. This proved difficult for all sides.

Broad access to the code was needed before open-source software could be offered through the app store. User support requests and bugs were channelled through the Linux distributions, and there was such a volume of reporting that it became difficult to feed information back to the appropriate software authors.

As the number of applications and Linux distributions grew, it became increasingly clear this model would not scale much further. Software authors took matters into their own hands, often picking a single Linux distribution to support and skipping the app store entirely. Because of this, they lost app discoverability and gained the complexity of running duplicative infrastructure.

This placed increased responsibility on developers at a time when the expectations of their role were already expanding. They are no longer just makers; they now bear the risk of breaking robotic arms with their code or bringing down MRI machines with a patch.

As an industry we acknowledge this problem—you can potentially have a bad update and software isn’t an exact science—but we then ask these developers to roll the dice. Do you risk compromise or self-inflicted harm?

Meanwhile the surface area increases. The industry continues a steady march of automation, creating ever more software components to plug together and layer solutions on. Not only do developers face the update question for their own code, they also must trust all developers facing that same decision in all the code beneath their own.

Printed Electronics: Technologies and Applications - Cesson Sévigné (35), September 5, 2018

Flexible and printed electronics is a disruptive technology: functions of varying complexity (printed or hybrid) are built from organic (carbon, hydrogen) or inorganic materials on flexible or rigid substrates (glass, paper), using conductive inks deposited over large areas by traditional printing techniques (roll-to-roll or sheet-fed). The cost/performance ratio of this new technology makes it very attractive.

The many applications envisioned (including new functions that can only be achieved with printed electronics) make it a major field of research. Thinness, light weight, durability, flexibility and conformability allow this technology to integrate easily into existing systems.

Applications abound: aeronautics, automotive, industry, construction, medical, textiles, packaging and more.

In partnership with the AFELIM association and CENTRALESUPELEC, CAP'TRONIC invites you to a meeting on printed electronics, to discover this industry and the many possible applications for your products.


- 09:00 Welcome of participants

Jean-Luc FLEUREAU - Bernard JOUGA

- 09:50 AFELIM: The French printed-electronics industry

- 10:10 LUMOMAT: Molecular materials for organic electronics and photonics
Laurence LAVENOT

- 10:30 ACCELONIX: Dedicated equipment

- 10:50 MARTIN TECHNOLOGIES: Products and applications

- 11:10 Break

- 11:40 PROTAVIC INTERNATIONAL: Conductive inks and adhesives
Alexandre LONG

- 12:00 CERADROP MGI / IETR: Printed electronics and 3D printing

- 12:20 ARJOWIGGINS CREATIVE PAPERS: Connected paper

- 12:40 SERIBASE: Application examples
Dominique BEDOUET

- 13:00 Networking cocktail

- 14:30 Tour of the IETR

Contact:

Jean-Luc FLEUREAU - 06 63 00 86 98

Event location:

Campus de Rennes
Avenue de la Boulaie
35510 Cesson-Sévigné

Guido van Rossum Stepping Down from Role as Python's Benevolent Dictator For Life

Python's Benevolent Dictator For Life (BDFL) Guido van Rossum today announced he's stepping down from the role.

On the Python mailing list today, van Rossum said, "I would like to remove myself entirely from the decision process. I'll still be there for a while as an ordinary core dev, and I'll still be available to mentor people—possibly more available. But I'm basically giving myself a permanent vacation from being BDFL, and you all will be on your own."

He credits his decision to step down as partly due to his experience with the turmoil over PEP 572: "Now that PEP 572 is done, I don't ever want to have to fight so hard for a PEP and find that so many people despise my decisions."

van Rossum says he will not appoint a successor and will leave that decision to the development team.

For old times' sake, see Linux Journal's interview with Guido van Rossum from 1998.

Cybersecurity Morning Session - CESSON SEVIGNE (35), September 27, 2018

CAP'TRONIC, in association with Rennes Atalante and Le Poool, invites you to a Morning Session on the theme of cybersecurity.

Tourism and Digital Technology - Lannion (22), November 8, 2018

Information and program coming soon

LLVM/Clang integration into Buildroot

As part of my end-of-studies project, I worked on the integration of LLVM and Clang into Buildroot; this article presents a summary of that work. Reading it requires familiarity with the main aspects of Buildroot, such as cross-compilation and package creation. The article is written in English so that it can be shared with the whole Buildroot community. If you would like to go further, a link to my internship report appears at the end of the article.

In this article I'll be discussing my internship project, the integration of LLVM and Clang into Buildroot. As a compiler infrastructure, LLVM can play two roles in Buildroot: on the one hand, it can be seen as a target package that provides functionality such as code optimization and just-in-time compilation to other packages; on the other hand, it opens the possibility of creating a cross-compilation toolchain that could be an alternative to Buildroot's default one, which is based on GNU tools.

This article is mainly focused on LLVM as a target package. Nevertheless, it also discusses some relevant aspects which need to be considered when building an LLVM/Clang-based cross-compilation toolchain.

The article first introduces the technologies involved in the project, to give the reader the information needed to understand its main objectives and how the software components interact with one another.


LLVM is an open-source project that provides a set of low-level toolchain components (assemblers, compilers, debuggers, etc.) designed to be compatible with the existing tools typically used on Unix systems. While LLVM provides some unique capabilities and is known for some of its tools, such as Clang (the C/C++/Objective-C/OpenCL C compiler frontend), the main thing that distinguishes LLVM from other compilers is its internal architecture.

LLVM differs from most traditional compiler projects (such as GCC) in that it is not just a collection of individual programs, but rather a collection of libraries that can be used to build compilers, optimizers, JIT code generators and other compiler-related programs. LLVM is an umbrella project: it hosts several subprojects, such as LLVM Core (the main libraries), Clang, lldb, compiler-rt, libclc and lld, among others.

Nowadays, LLVM serves as a base platform for implementing both statically and runtime-compiled programming languages, such as C/C++, Java, Kotlin, Rust and Swift. LLVM is not only used as a traditional toolchain, however; it is also popular in graphics, as in the following cases:

• llvmpipe (software rasterizer)
• CUDA (NVIDIA Compiler SDK based on LLVM)
• AMDGPU open source drivers
• Most OpenCL implementations, which are based on Clang/LLVM

Internal aspects

LLVM’s compilation strategy follows a three-phase approach where the main components are: the frontend, the optimizer and the backend. Each phase is responsible for translating the input program into a different representation, making it closer to the target language.

Figure 1: Three-phase approach


The frontend is the component in charge of validating the input source code, checking and diagnosing errors, and translating it from its original language (e.g., C/C++) to an intermediate representation (LLVM IR in this case) through lexical, syntactic and semantic analysis. Besides translation, the frontend can also perform language-specific optimizations.


The LLVM IR is a complete virtual instruction set used throughout all phases of the compilation strategy. Its main characteristics are:

• Mostly architecture-independent instruction set (RISC-like)
• Strongly typed
– Single value types (e.g., i8, i32, double)
– Pointer types (e.g., i8*, i32*)
– Array types, structure types, function types, etc.
• Unlimited number of virtual registers in Static Single Assignment (SSA) form

The Intermediate Representation is the core of LLVM. It is fairly readable, designed to be easy for frontends to generate yet expressive enough to allow effective optimizations that produce fast code for real targets. This intermediate representation exists in three forms: a textual human-readable assembly format (.ll), an in-memory data structure and an on-disk binary "bitcode" format (.bc). LLVM provides tools to convert from the textual format to bitcode (llvm-as) and vice versa (llvm-dis). Below is an example of what LLVM IR looks like:

Figure 2: LLVM Intermediate Representation
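Where the figure doesn't reproduce, a minimal hand-written function (my own illustration, not the article's figure) gives the flavor of the textual .ll format, with explicit types on every operand and SSA virtual registers:

```llvm
; sum two 32-bit integers; note the explicit types everywhere
define i32 @add(i32 %a, i32 %b) {
entry:
  %sum = add nsw i32 %a, %b   ; %sum is an SSA virtual register
  ret i32 %sum
}
```

Saved as add.ll, this can be assembled to binary bitcode with llvm-as add.ll and turned back into text with llvm-dis.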


In general, the two main objectives of the optimization phase are improving the program's execution time and reducing its code size. The strategy proposed by LLVM is designed to achieve high-performance executables through a system of continuous optimization. Because all LLVM optimizations are modular (they are called passes), it is possible to use all of them or only a subset. There are analysis passes and transformation passes. Analysis passes compute information about an IR unit (a module, function, block or instruction) without mutating it, and produce a result that can be queried by other passes. A transformation pass, on the other hand, transforms a unit of IR in some way, leading to more efficient code (also in IR). Every LLVM pass has a specific objective, such as dead code elimination, constant propagation, combining redundant instructions, dead argument elimination and many others.
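As a small illustration (hand-written here, not taken from the article), passes such as constant propagation and instruction combining can collapse a chain of computations on constants:

```llvm
; before optimization: two instructions operating only on constants
define i32 @f() {
entry:
  %a = add i32 2, 3        ; always 5
  %b = mul i32 %a, 4       ; always 20
  ret i32 %b
}

; after optimization, the whole body folds to a single instruction:
;   ret i32 20
```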


This component, also known as the code generator, is responsible for translating a program in LLVM IR into optimized target-specific assembly. The main tasks carried out by the backend are instruction selection, register allocation and instruction scheduling. Instruction selection is the process of translating LLVM IR operations into instructions available on the target architecture, taking advantage of specific hardware features that can lead to more efficient code. Register allocation involves mapping variables stored in the IR virtual registers onto the real registers available in the target architecture, taking into account the calling convention defined by the ABI. Once these tasks and others, such as memory allocation and instruction ordering, are performed, the backend is ready to emit the corresponding assembly code, generating either a text file or an ELF object file as output.


The main advantage of the three-phase model adopted by LLVM is the possibility of reusing components, as the optimizer always works with LLVM IR. This eases the task of supporting new languages: new frontends which generate LLVM IR can be developed while reusing the optimizer and backend. Likewise, support for more target architectures can be added by writing only a new backend and reusing the frontend and the optimizer.


Clang is an open source compiler frontend for C/C++, Objective-C and OpenCL C built on LLVM, so it can use LLVM's optimizer to produce efficient code. Since the start of its development in 2005, Clang has focused on providing expressive diagnostics and easy IDE integration. Like LLVM, it is written in C++ and has a library-based architecture, which allows IDEs, for example, to use its parser to help developers with autocompletion and refactoring. Clang was designed to offer GCC compatibility, so it accepts most of GCC's command-line arguments for specifying compiler options. However, GCC offers many extensions to the standard language, while Clang's purpose is to be standard-compliant. Because of this, Clang cannot be a replacement for GCC when compiling projects that depend on GCC extensions, as happens with the Linux kernel. In this case, Linux does not build because Clang does not accept the following kinds of constructs:

• Variable-length arrays inside structures
• Nested functions
• Explicit register variables

Furthermore, the Linux kernel still depends on the GNU assembler and linker.

An interesting feature of Clang is that, as opposed to GCC, it can compile for multiple targets from the same binary; that is, it is a cross-compiler in itself. To control the target the code will be generated for, the target triple must be specified on the command line with the --target=<triple> option. For example, --target=armv7-linux-gnueabihf corresponds to the following system:

• Architecture: arm
• Sub-architecture: v7
• Vendor: unknown
• OS: linux
• Environment: GNU
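For instance (a hypothetical invocation; the sysroot path is a placeholder and a matching sysroot must be available), the same Clang binary can build either for the host or for the board described by the triple above:

```shell
# Native compilation (the host triple is the default target)
clang -o hello hello.c

# Cross-compilation for ARMv7 Linux with the GNU hard-float EABI
clang --target=armv7-linux-gnueabihf --sysroot=/path/to/arm-sysroot -o hello hello.c
```

With GCC, by contrast, each target requires a separately built cross-compiler binary.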

Linux graphics stack

This section gives an introduction to the Linux graphics stack in order to explain the role of LLVM inside this complex system, which is comprised of many open source components that interact with each other. Fig. 3 shows all the components involved when 2D and 3D applications request rendering services from an AMD GPU using X:

Figure 3: Typical Linux open source graphics stack for AMD GPUs

X Window System

X is a software system that provides 2D rendering services to allow applications to create graphical user interfaces. It is based on a client-server architecture and exposes its services, such as managing windows, displays and input devices, through two shared libraries called Xlib and XCB. Given that X uses network client-server technology, it is not efficient when handling 3D applications, due to its latency. Because of this, there exists a software system called the Direct Rendering Infrastructure (DRI), which provides a faster path between applications and the graphics hardware.

The DRI/DRM infrastructure

The Direct Rendering Infrastructure is a subsystem that allows applications using the X Server to communicate with the graphics hardware directly. The most important component of DRI is the Direct Rendering Manager (DRM), a kernel module that provides multiple services:

• GPU initialization, such as uploading firmware or setting up DMA areas.
• Kernel Mode Setting (KMS): setting display resolution, colour depth and refresh rate.
• Multiplexing access to the rendering hardware among multiple user-space applications.
• Video memory management and security.

DRM exposes all its services to user-space applications through libdrm. As most of these services are device-specific, there is a different DRM driver for each GPU family, such as libdrm-intel, libdrm-radeon, libdrm-amdgpu, libdrm-nouveau, etc. This library is intended to be used by X Server display drivers (such as xserver-xorg-video-radeon, xserver-xorg-video-nvidia, etc.) and by Mesa 3D, which provides an open source implementation of the OpenGL specification.

Mesa 3D

OpenGL is a specification that describes an API for rendering 2D and 3D graphics by exploiting the capabilities of the underlying hardware. Mesa 3D is a collection of open source user-space graphics drivers that implement a translation layer between OpenGL and the kernel-space graphics drivers, exposing the OpenGL API to applications. Mesa takes advantage of the DRI/DRM infrastructure to access the hardware directly and outputs its graphics to a window allocated by the X server; this binding is handled by GLX, an extension that ties OpenGL to the X Window System.

Mesa provides multiple drivers for AMD, Nvidia and Intel GPUs, as well as some software implementations of 3D rendering that are useful for platforms without a dedicated GPU. Mesa drivers are divided into two groups: Mesa Classic and Gallium 3D. The latter is a set of utilities and common code shared by multiple drivers, such as nouveau (Nvidia), RadeonSI (AMD GCN) and softpipe (CPU).

As shown in Fig. 4, LLVM is used by llvmpipe and RadeonSI, and it can optionally be used by r600g if OpenCL support is needed. llvmpipe is a multithreaded software rasterizer that uses LLVM to do JIT compilation of GLSL shaders. Shaders, point/line/triangle rasterization and vertex processing are implemented in LLVM IR, which is then translated to machine code. Another, much more optimized software rasterizer is OpenSWR, developed by Intel, which targets x86_64 processors with AVX or AVX2 capabilities. Both llvmpipe and OpenSWR are a much faster alternative to Mesa's classic single-threaded softpipe software rasterizer.

Figure 4: Mesa 3D drivers

LLVM/Clang for Buildroot

The main objective of this internship was to create LLVM and Clang packages for Buildroot. These packages enable new functionality such as Mesa 3D's llvmpipe software rasterizer (useful for systems without a dedicated GPU) and RadeonSI (the Gallium 3D driver for AMD GCN), and they also provide the components necessary to integrate OpenCL implementations. Once LLVM is present on the system, new packages that rely on this infrastructure can be added.

Buildroot Developers Meeting

After some research into the state of the art of the LLVM project, the objectives of the internship were presented and discussed at the Buildroot Developers Meeting in Brussels, reaching the following conclusions:

• LLVM itself is very useful for other packages (Mesa 3D's llvmpipe, OpenJDK's JIT compiler, etc.).
• It is questionable whether there is a need for Clang in Buildroot, as GCC is still needed and has mostly caught up with Clang regarding performance, diagnostics and static analysis. It would be possible to build a complete userspace with Clang, but some packages might break.
• It could be useful to have a host-clang package that is user selectable.
• The long-term goal is to have a complete clang-based toolchain.

LLVM package

LLVM comes as a set of libraries with many purposes, such as working with LLVM IR, running Analysis or Transformation passes, code generation, etc. The build system makes it possible to gather all these components into a single shared library,, which is the only file that needs to be installed on the target system to provide support to other packages.

Some considerations

In order to cross-compile LLVM for the target, the llvm-config and llvm-tblgen tools must first be compiled for the host. At the start of the project, a minimal version of host-llvm containing only these two tools was built by setting HOST_LLVM_MAKE_OPTS = llvm-tblgen llvm-config.

The most important options to set are the following ones:

• Path to host’s llvm-tblgen: –DLLVM_TABLEGEN
• Default target triple: –DLLVM_DEFAULT_TARGET_TRIPLE
• Host triple (native code generation for the target): –DLLVM_HOST_TRIPLE
• Target architecture: –DLLVM_TARGET_ARCH
• Targets to build (only necessary backends): –DLLVM_TARGETS_TO_BUILD
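As an illustration (hypothetical values, following Buildroot's usual <pkg>_CONF_OPTS convention for CMake packages), a .mk fragment setting these options for an ARM target might look like:

```make
# Sketch of llvm.mk configure options for an ARMv7 target (values are examples)
LLVM_CONF_OPTS += \
	-DLLVM_TABLEGEN=$(HOST_DIR)/bin/llvm-tblgen \
	-DLLVM_DEFAULT_TARGET_TRIPLE=armv7-buildroot-linux-gnueabihf \
	-DLLVM_HOST_TRIPLE=armv7-buildroot-linux-gnueabihf \
	-DLLVM_TARGET_ARCH=ARM \
	-DLLVM_TARGETS_TO_BUILD=ARM
```

Building only the needed backend keeps both build time and the size of the resulting library down.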


llvm-config is a program that prints compiler flags, linker flags and other configuration-related information, used by packages that need to link against the LLVM libraries. In general, configure programs are scripts, but llvm-config is a binary. Because of this, the llvm-config compiled for the host needs to be placed in STAGING_DIR, as the llvm-config compiled for the target cannot run on the host:

Figure 5: llvm-config

To get the correct output from llvm-config when configuring target packages that link against, host-llvm must be built using the same options (except that the LLVM tools are not built for the target) and the host-llvm tools must be linked against (building only llvm-tblgen and llvm-config is not sufficient). For example, Mesa 3D checks for the AMDGPU backend when built with LLVM support and with the Gallium R600 or RadeonSI drivers selected:

llvm_add_target() {
    new_llvm_target=$1
    driver_name=$2

    if $LLVM_CONFIG --targets-built | grep -iqw $new_llvm_target ; then
        llvm_add_component $new_llvm_target $driver_name
    else
        AC_MSG_ERROR([LLVM target '$new_llvm_target' not enabled in your LLVM build. Required by $driver_name.])
    fi
}

If the AMDGPU backend is not built for the host, llvm-config --targets-built will make the build fail. It is also important to set LLVM_LINK_LLVM_DYLIB, because if this option is not enabled, llvm-config --shared-mode will output "static" instead of "shared", leading to statically linking libLLVM.
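A sketch of the corresponding option (again following Buildroot's <pkg>_CONF_OPTS convention; LLVM_LINK_LLVM_DYLIB is a standard LLVM CMake variable):

```make
# Link tools against the single shared library so that
# `llvm-config --shared-mode` reports "shared"
LLVM_CONF_OPTS += -DLLVM_LINK_LLVM_DYLIB=ON
```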

Some benchmarks

It was decided to run the GLMark2 and GLMark2-es2 benchmarks (available in Buildroot) to test OpenGL 2.0 and OpenGL ES 2.0 rendering performance, respectively, on different architectures. The available hardware made it possible to test the x86_64, ARM, AArch64 and AMDGPU LLVM backends and to verify the better performance of llvmpipe with respect to softpipe:

  • Platform 1 – x86_64 (HP ProBook)
    • Processor: AMD A4-3300M Dual Core (SSE3) @ 1.9 GHz
    • GPU: AMD Radeon Dual Graphics (HD6480G + HD7450)
  • Platform 2 – ARM (Raspberry Pi 2 Model B)
    • Processor: ARMv7 Cortex-A7 Quad Core @ 900 MHz
    • GPU: Broadcom Videocore IV
  • Platform 3 – ARM/AArch64 (Raspberry Pi 3 Model B)
    • Processor: ARMv8 Cortex-A53 Quad Core @ 1.2 GHz
    • GPU: Broadcom Videocore IV

Table 1: GLMark2 and GLMark2-es2 results


Once LLVM was verified to work on the most common architectures, the next goal was activating OpenCL support. This task involved multiple steps, as there are some dependencies which need to be satisfied.
OpenCL is an API enabling general-purpose computing on GPUs (GPGPU) and other devices (CPUs, DSPs, FPGAs, ASICs, etc.). It is well suited to certain kinds of parallel computation, such as hash cracking (SHA, MD5, etc.), image processing and simulations. OpenCL presents itself as a library with a simple interface:

• Standardized API headers for C and C++
• The OpenCL library (, a collection of types and functions which all conforming implementations must provide.

The standard is designed to allow many OpenCL platforms to coexist on one system, where each platform can expose various devices. Each device has certain compute characteristics (number of compute units, optimal vector size, memory limits, etc.). The OpenCL standard allows loading OpenCL kernels, which are pieces of C99-like code that are JIT-compiled by the OpenCL implementation (most of them rely on LLVM to work), and executing these kernels on the target hardware. Functions are provided to compile the kernels, load them, transfer data back and forth between the target devices, etc.
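As a minimal illustration (not taken from the report), an OpenCL kernel adding two vectors might look like this; the host program would compile it at run time with clBuildProgram and launch it with clEnqueueNDRangeKernel:

```c
/* vector_add.cl - each work-item adds one pair of elements */
__kernel void vector_add(__global const float *a,
                         __global const float *b,
                         __global float *result)
{
    size_t i = get_global_id(0);  /* index of this work-item */
    result[i] = a[i] + b[i];
}
```

The JIT compilation step is precisely where implementations such as Clover rely on libclang and libLLVM.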

There are multiple open source OpenCL implementations for Linux:

Clover (Computing Language over Gallium)
Clover is a hardware-independent OpenCL API implementation that works with Gallium drivers (hardware-dependent user-space GPU drivers) and was merged into Mesa 3D in 2012. It currently supports OpenCL 1.1 and is close to 1.2. It has the following dependencies:

  • libclang: provides an OpenCL C compiler frontend and generates LLVM IR.
  • libLLVM: LLVM IR optimization passes and hardware-dependent code generation.
  • libclc: an implementation of the OpenCL C standard library in LLVM IR bitcode, providing device builtin functions. It is linked at run time.

It currently works with the Gallium R600 and RadeonSI drivers.

Pocl (Portable Computing Language)
This implementation is OpenCL 1.2 standard-compliant and supports some 2.0 features. The major goal of the project is to improve the performance portability of OpenCL programs, reducing the need for target-dependent manual optimizations. Pocl currently supports many CPUs (x86, ARM, MIPS, PowerPC), NVIDIA GPUs via CUDA (experimental), HSA-supported GPUs and multiple private off-tree targets. It also works with libclang and libLLVM, but it has its own Pocl Builtin Lib (instead of using libclc).

Beignet
This implementation by Intel targets Intel GPUs (HD and Iris) starting with Ivy Bridge, and offers OpenCL 2.0 support for Skylake, Kaby Lake and Apollo Lake.

ROCm OpenCL Runtime
This implementation by AMD targets ROCm (Radeon Open Compute) compatible hardware (HPC/Hyperscale), providing the OpenCL 1.2 API with OpenCL C 2.0. It became open source in May 2017.

Because of this fragmentation among OpenCL implementations (without even counting the proprietary ones), there exists a mechanism that allows multiple implementations to co-exist on the same system: the OpenCL ICD (Installable Client Driver). It needs the following components to work:

• (the ICD loader): this library dispatches the OpenCL calls to the OpenCL implementations.
• /etc/OpenCL/vendors/*.icd: these files tell the ICD loader which OpenCL implementations (ICDs) are installed on the system. Each file has a single line containing the name of the shared library with the implementation.
• One or more OpenCL implementations (the ICDs): the shared libraries pointed to by the .icd files.
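For example (a sketch of the layout; the exact file name depends on the installed implementation), registering Mesa's Clover implementation amounts to a one-line vendor file naming its OpenCL library:

```
$ cat /etc/OpenCL/vendors/mesa.icd
```

The ICD loader reads each such file and dlopens the named library, so several vendors' files can sit side by side in the same directory.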

Clover integration

Considering that the available test system has an AMD Radeon Dual Graphics GPU (integrated HD6480G + dedicated HD7450M) and that Mesa 3D is already present in Buildroot, it was decided to work with the OpenCL implementation provided by Clover. The diagram in Fig. 6 shows the components necessary to set up the desired OpenCL environment and how they interact with each other.

Figure 6: Clover OpenCL implementation


The first step was packaging Clang for the host, as it is necessary to build libclc: this library is written in OpenCL C, and some functions are implemented directly in LLVM IR. Clang transforms the .cl and .ll source files into LLVM IR bitcode (.bc), calling llvm-as (the LLVM assembler).

Regarding the Makefile for building host-clang, the path to the host's llvm-config must be specified, and some manual configuration is needed: Clang is meant to be built as a tool inside LLVM's source tree (LLVM_SOURCE_TREE/tools/clang), whereas Buildroot manages packages individually, so Clang's source code cannot be downloaded inside LLVM's tree. Having Clang installed on the host is not only useful for building libclc; it also provides an alternative to GCC, which opens up the possibility of creating a new toolchain based on it.

Clang for target

It is important to note that this package only installs, not the Clang driver. When Clang was built for the host, it generated multiple static libraries (libclangAST.a, libclangFrontend.a, libclangLex.a, etc.) and finally a shared object ( containing all of them. However, when building for the target, it produced multiple shared libraries and then This resulted in the following error when trying to use software that links against libOpenCL, which statically links with libclang (e.g., clinfo):

CommandLine Error: Option 'track-memory' registered more than once!
LLVM ERROR: inconsistency in registered CommandLine options

To avoid the duplicated symbols, shared libraries must be disabled: CLANG_CONF_OPTS += -DBUILD_SHARED_LIBS=OFF


libclc
This library provides an implementation of the library requirements of the OpenCL C programming language, as specified by the OpenCL 1.1 specification. It is designed to be portable and extensible: it provides generic implementations of most library requirements and allows targets to override them at the granularity of individual functions, using LLVM intrinsics for example. It currently supports the AMDGCN, R600 and NVPTX targets.

There is a particular problem with libclc: when OpenCL programs call the clBuildProgram function to compile and link a program (generally an OpenCL kernel) from source at run time, the clc headers need to be available in /usr/include/clc. This is not possible because Buildroot removes /usr/include from the target, as the embedded platform is not intended to store development files, mainly because no compiler is installed on it. But since OpenCL works with libLLVM to do code generation, the clc headers must be stored somewhere.

The file that adds the path to the libclc headers is invocation.cpp, located at src/gallium/state_trackers/clover/llvm inside Mesa's source tree:

// Add libclc generic search path
// Add libclc include

It was decided to store these files in /usr/share, which can be specified in libclc's Makefile by setting --includedir=/usr/share. Given that the clc headers are installed to a non-standard location, it is necessary to specify this path when building Mesa. Otherwise, pkg-config outputs the absolute path to the headers located in STAGING_DIR, which causes a runtime error when calling clBuildProgram:

    if test "x$have_libclc" = xno; then
        AC_MSG_ERROR([pkg-config cannot find libclc.pc which is required to build clover.
                    Make sure the directory containing libclc.pc is specified in your
                    PKG_CONFIG_PATH environment variable.
                    By default libclc.pc is installed to /usr/local/share/pkgconfig/])
    else
        LIBCLC_INCLUDEDIR=`$PKG_CONFIG --variable=includedir libclc`
        LIBCLC_LIBEXECDIR=`$PKG_CONFIG --variable=libexecdir libclc`
    fi

Verifying Clover installation with Clinfo

Clinfo is a simple command-line application that enumerates all the known properties of the OpenCL platforms and devices available on the system. It tries to output all possible information, including that provided by platform-specific extensions. The main purposes of Clinfo are:

• Verifying that the OpenCL environment is set up correctly: if clinfo cannot find any platform or device (or fails to load the OpenCL dispatcher library), chances are high that no other OpenCL application will run.
• Verifying that the OpenCL development environment is set up correctly: if clinfo fails to build, chances are high that no other OpenCL application will build.
• Reporting the actual properties of the available devices.

Once installed on the target, clinfo successfully found Clover and the devices available to work with, providing the following output:

Figure 7: clinfo

Testing Clover with Piglit

Piglit is a collection of automated tests for OpenGL and OpenCL implementations. The goal of the project is to help improve the quality of open source OpenGL and OpenCL drivers by providing developers with a simple means of performing regression tests. Once Clover was installed on the target system, it was decided to run Piglit in order to verify the conformance of Mesa's OpenCL implementation, taking the Buildroot packaging from Romain Naour's patch series.

To run the OpenCL test suite, the following command must be executed:

piglit run tests/cl results/cl

The results are written in JSON format and can be converted to HTML by running:

piglit summary html --overwrite summary/cl results/cl

Figure 8: Piglit results

Most of the tests that failed can be classified in the following categories:

• Program build with optimization options for OpenCL C 1.0/1.1+
• Global atomic operations (add, and, or, max, etc.) using a return variable
• Floating point multiply-accumulate operations
• Some builtin shuffle operations
• Global memory
• Image read/write 2D
• Tail calls
• Vector load

Some failures are due to missing hardware support for particular operations, so it would be useful to run Piglit on a more recent GPU using the RadeonSI Gallium driver in order to compare the results. It would also be interesting to test, with both GPUs, which packages can benefit from OpenCL support through Clover.

Conclusions and future work

Currently, LLVM 5.0.2, Clang 5.0.2 and LLVM support for Mesa 3D are available in Buildroot 2018.05. The update of these packages to version 6.0.0 has already been done and will be available in the next stable release.

Regarding future work, the most immediate goal is to get OpenCL support for AMD GPUs merged into Buildroot. The next step will be to add more packages that rely on LLVM/Clang and OpenCL. Creating a toolchain based on LLVM/Clang is still being discussed on the mailing list, and it is a topic that requires agreement from the core developers of the project.


The complete report can be downloaded here: LLVM Clang integration into Buildroot. It shows the development of the project in detail and also contains a section dedicated to the VC4CL package, which enables OpenCL on the Broadcom Videocore IV present in all Raspberry Pi models.

Colloque IoT 2018 - Angers (49), November 22, 2018

800 professionals gathered, a showroom with 70 electronics and digital-technology exhibitors, a Start-up Village, 18 conferences and tutorials, and more.

The ESEO IoT Colloquium is co-organized with We Network, CAP'TRONIC, Angers French Tech, Angers Loire Développement, the CCI, and many corporate partners and speakers.

Online registration

Date and time
Thursday, November 22, 2018
08:30 – 19:00


10 boulevard Jeanneteau
49000 Angers

Seminar "From data acquisition to data processing in embedded systems" - Orléans (45), October 18, 2018

CAP'TRONIC, in partnership with CRESITT, invites you to this seminar, whose goal is to look at the various technologies used to take measurements in industrial and professional settings: sensors, of course, but also data-acquisition units capable of storing and pre-processing the data, and the means of transmitting that data.

Through a technical approach, existing and in-development solutions and applications will be presented, for example: soil-moisture measurement, weather monitoring, remote meter reading, access systems, instrumentation, etc. The processing of this data in embedded systems will also be covered (embedded Linux, signal- and image-processing libraries, real-time algorithms, etc.).

The measurement and instrumentation industry is well represented in the Centre-Val de Loire region, as are its research laboratories. Some of them will come to present their products and work in this field.


Polytech Orléans
12 rue de Blois
45000 Orléans

Allwinner VPU support in mainline Linux status update (week 27)

This week, significant time was dedicated to preparing a new revision of the Sunxi-Cedrus VPU kernel driver. This new version (started last week), based on version 15 of the media requests API, brought a number of challenges. First off, integrating the recently-tested VPU-side untiling of the destination buffers required a significant rewrite …

Hackable #25 has arrived at your newsstand!

Here comes the summer issue of Hackable.

And in summer the weather is fine, the sky is blue and few clouds hide the pretty surface of our planet when you look at it from space…

So what better time than this to capture such images? All you need is an RTL-SDR receiver costing a handful of euros, a suitable homemade antenna built from materials available in any DIY store, and good software.

One of the images received by the editorial team during the experiments

Software-defined radio is an almost limitless field that is now within everyone's reach, whatever the budget. The main topic of issue 25 is therefore the direct reception of satellite images. We are not talking about getting these images from the net or some other remote service, but genuinely live from space, as one of the three NOAA satellites passes over your head (or not too far away).

In this feature we will explore what you need to know to get started, the first experiments to validate the concept, the ins and outs of APT analog transmissions, the construction of an antenna suited to this type of reception and, of course, the recording of the signals and the decoding of the messages to obtain an image taken by a craft orbiting some 800 km above us.

Note: The site hosting one of the programs used to decode APT/NOAA images currently seems to be experiencing some difficulties. Users of the software have organized to provide download links via Reddit. If the WXtoImg site is not reachable when you read this, you will find archives there for Windows, Debian/Ubuntu and RPM/Red Hat (a user has also made the Armhf versions available on Dropbox).

Note (17/07/2018): The WXtoImg site is back… in a way. A user of the tool has recreated the original site, including the various downloadable versions as well as the keys needed to unlock all of the program's features.

In this issue:

  • Equipment
    • p.04: A hot-air soldering station for under €30?
  • Ardu'n'co
    • p.12: Your ESP8266 boards update themselves!
  • Cover story
    • p.24: An introduction to receiving satellite images
    • p.42: Building an antenna to receive satellite images
    • p.52: Receiving your first satellite images
  • Radio & Frequencies
    • p.64: Measure the speed of light in cables!
  • Science & Reference
    • p.72: The fabulous history of digital computing machines in the electromechanical era
  • Voltages & Currents
    • p.82: Photovoltaic cells
  • Retro Tech
    • p.88: SCSI2SD, or how to replace a SCSI disk with a microSD card


Receive satellite images!

On the menu this summer: receiving satellite images with Hackable! You will learn how to build your antenna, pick up signals and obtain images. This summer issue will also invite you to use a hot-air soldering station to repair or modify modern circuits, to understand how photovoltaic cells work, to measure the speed of light in cables, and to discover the fabulous history of digital computing machines in the electromechanical era. Head to your newsstand, to our online shop or to our Connect online reading platform to discover this new issue!


CAP SUR L'INNOVATION: Artificial Intelligence serving the real world - Paris, October 17, 2018

Centre Pierre Mendès France - Bercy.
Under the patronage of the Ministry of Economy and Finance

Online registration

CAP'TRONIC and the DGE are once again creating the event, bringing together very small businesses, SMEs and start-ups around innovation. A genuine day of exchanges and testimonials, this annual meeting gives participants a broad view of the potential of connected digital technologies.

This year, the event will take place at the Ministry of Economy and Finance and will revolve around a central, topical theme: Artificial Intelligence. The talks and round tables will bring together experts in the field to shed light on it and give an application-oriented view of the domain.

The event will also be the occasion to discover the winners of the 12th CAP'TRONIC Trophies, a showcase of innovations in four categories: health and well-being, industry and services, consumer products, and young companies. A fifth trophy will be awarded on the day itself, based on participants' votes among the nominees.


- 9:00 - Welcome

- 9:30 - Introduction by Yves BOURDON, president of JESSICA France / CAP'TRONIC, and by the Directorate General for Enterprise (DGE) of the Ministry of Economy and Finance.

- 10:00 - Keynote: "Artificial Intelligence: definition, uses and challenges"

- 10:30 - Round table "Which industrial applications for AI?"

- 11:30 - Flash presentations of the 13 projects nominated for the CAP'TRONIC Trophies, with an audience vote.

- 12:15 - Buffet lunch (exhibition of the products of the SMEs and start-ups nominated for the CAP'TRONIC Trophies).

- 14:00 - CAP'TRONIC Trophies award ceremony.

- 14:45 - Round table "Artificial Intelligence: embedded or cloud?"

- 15:45 - Talk by Paul-François FOURNIER, Executive Director, Innovation Division, Bpifrance.

- 16:15 - Round table "Cyber-physical systems: which new services?"
Cyber-physical systems at the heart of the digital transformation of products and services.

- 17:15 - Closing remarks.

Venue:

Ministère de l'Economie et des Finances
Centre Pierre Mendès France
139 rue de Bercy
75012 Paris

Press partners:

Registration form:

ASPROM / CAP'TRONIC seminar: Bioenergies - New and renewable energies - Technologies, challenges and applications - Paris, October 10, 2018

Biomass is defined as "the biodegradable fraction" of products, waste and residues from agriculture, including plant and animal substances from land and sea, from forestry and related industries, as well as the biodegradable fraction of industrial and household waste. All of this organic matter can become a source of energy through combustion (e.g. wood energy), after methanization (biogas) or after further chemical transformation (biofuels).

As a solution for the future, biomass is the leading source of renewable energy produced in France, ahead of hydro, wind and geothermal power.

For this seminar, we asked some of the best French experts to present the latest developments in biomass that contribute to better energy performance.

Program for October 10, 2018

- 9:00 – 9:30: Introduction to the seminar

- 9:30 – 10:15: Biomass combustion
Laboratoire d'Etude et de Recherche sur le MAtériau Bois, Université de Lorraine, ENSTIB, Epinal

- 10:15 – 11:00: Methanization: principles, applications, potential

- 11:00 – 11:30: Coffee break

- 11:30 – 12:15: Microbiological conversion of carbohydrates into drop-in biofuels
By Bernard CHAUD, Director of Industrial Strategy at GLOBALBIOENERGIES

- 12:15 – 13:00: Pyrolysis, liquefaction and gasification of biomass
Anthony DUFOUR, Yann LE BRECH, Guillain MAUVIEL
Laboratoire Réactions et Génie des Procédés, CNRS-Université de Lorraine, ENSIC, Nancy

- 13:00 – 14:30: Lunch

- 14:30 – 15:15: Biomass gasification for heat and/or electricity production
Etienne LEBAS, Scientific Director of COGEBIO

- 15:15 – 16:00: Biomass gasification in a dense fluidized bed
Matthieu DEBAL, Pierre GIRODS, Yann ROGAUME
Laboratoire d'Etude et de Recherche sur le MAtériau Bois, Université de Lorraine, ENSTIB, Epinal

- 16:30 – 17:15: Second-generation biofuels, close to industrialization
Gilles FERSCHNEIDER, project manager at IFPEN

- 17:15 – 17:45: Sponsor talks

Program for October 11, 2018

- 9:00 – 10:30: The value of agricultural and forestry resources for bioenergy (to be confirmed)

- 10:30 – 11:00: Coffee break

- 10:00 – 11:00: What place for biofuels in aviation?

- 11:00 – 11:45: Responsible air transport and future sustainable, renewable aviation fuels: prospects and challenges
Philippe MARCHAND, Refining & Chemicals, Strategy-Development-Research/Bio Division at TOTAL

- 11:45 – 12:30: Safran on the trail of biofuels (to be confirmed)
Nicolas JEULAND, future-fuels expert at Safran

- 12:30 – 14:00: Lunch

- 14:00 – 14:45: H2 from bio-resources
Dr. Louise JALOWIECKI-DUHAMEL, CNRS researcher, UCCS, Unité de Catalyse et Chimie du Solide

- 14:45 – 15:30: Microalgae and biofuels: potential and current challenges

- 15:30 – 16:00: Coffee break

- 16:00 – 16:45: Presentation by ENGIE (to be confirmed)

- 16:45 – 17:30: A 100% renewable gas mix by 2050?
Alban THOMAS, Strategy and Regulation Division, GRTGAZ

Registration fees:

For large companies and investors (VCs):
- €840 incl. VAT (20%), i.e. €700 excl. VAT, for the full seminar
- €600 incl. VAT (20%), i.e. €500 excl. VAT, for one day of your choice

For SMEs (250 employees or fewer) and academics (with proof of status):
- €360 incl. VAT (20%), i.e. €300 excl. VAT, for the full seminar
- €240 incl. VAT (20%), i.e. €200 excl. VAT, for one day of your choice

For CAP'TRONIC-eligible SMEs: one day of your choice is covered; registration for a second day costs €240 incl. VAT, i.e. €200 excl. VAT.

To pre-register, please send an email to Céline GONCALVES - a registration confirmation will be sent to you.
The CAP'TRONIC coverage is valid for one person per SME (first 10 registrants only).

Contact:

Christophe BRICOUT -

Venue:

UIMM, 56 avenue de Wagram, 75017 PARIS

ARISTOTE / CAP'TRONIC seminar: Cybersecurity and protecting company assets - Palaiseau (91), October 1, 2018

With connected objects (IoT) now used in companies alongside social networks and smartphones, how can a company's assets be protected through the proper use of these devices and of the cloud, taking both human and technological aspects into account?

Program to be announced

Contact:

Samuel EVAIN: -

Venue:

CEA de Saclay - Site Nano-INNOV
Avenue de la Vauve, 91120 Palaiseau

Enova Paris conferences: the fight against counterfeiting - Paris, October 23, 2018

As part of the Enova Paris trade show, CAP'TRONIC and ARMIR invite you to conferences and workshops on the fight against counterfeiting on Tuesday, October 23 and Wednesday, October 24 at the Paris expo exhibition center, Porte de Versailles.

Preliminary program:

Tuesday, October 23, 2018

- 10:00-10:45: General overview of the fight against counterfeiting
D. Saussinan - UNIFAB

- 10:45-11:30: THID: a new THz technique for product identification
F. Garet – IMEP-LAHC

- 11:30-12:15: Stimulated infrared thermography: a tool in the fight against counterfeiting
J.L. Bodnar - GRESPI

- 14:15-15:00: 2D and 3D IR/THz multispectral imaging for the testing and inspection of opaque or concealed insulating materials; application to counterfeiting
J.P. Caumes – Nhetis

- 15:00-15:45: Analysis of counterfeit pharmaceuticals using THz technology
B. Fischer – ISL

Wednesday, October 24, 2018

- 10:30-11:15: Toward counterfeit detection by ultrafast pulsed THz testing: imaging coupled with material characterization
Uli Schmidhammer - Teratonics

- 11:15-12:00: Machine learning: a new avenue in the fight against counterfeiting?
Y. Chaouche, consultant

Contact:

Michel Marceau - 01 69 08 24 90

Venue:

Paris expo Porte de Versailles

Our guide to administering your network is back on newsstands!

This special issue devoted to building your local network is once again available on newsstands. As a reminder, it provides a complete guide to building and administering your network under Linux. After a general introduction to networking, it covers four main areas: architecture, Wifi, services and, finally, security. Beyond the newsstand, visit our store and our online reading platform Connect to discover this special issue!

Contents

Introduction to networking: get familiar with how networks work and with their tools
p.12 Understanding the network
p.24 Installing our operating systems

Architecture: map out your network, set up routing and a firewall, and install DHCP and DNS servers
p.32 Setting up the network's nodes: routing and firewall
p.46 Distributing network parameters: DHCP and DNS

Wifi: install Wifi access points to provide wireless connectivity
p.56 Setting up a Wifi access point
p.68 Using several Wifi networks

Services: manage your network's users and offer them file and print sharing
p.78 Managing our network's users
p.90 Sharing our files
p.96 Making a good impression: printing

Security: create a VPN and set up secure connections with SSH
p.106 Setting up a virtual private network
p.116 Connecting in all directions with SSH


A POSIX-compatible executive environment: NuttX to drive an STM32-based network analyzer

NuttX provides STM32 microcontrollers with an executive environment that targets POSIX compatibility and relies on drivers strongly inspired by Linux's architecture. We demonstrate porting it to a new platform by implementing a radiofrequency network analyzer, taking full advantage of the features provided for rapid deployment through the various drivers made available: ADC, SPI, PWM and GPIO. In this context, the application boils down to a sequence of system calls to these drivers.

Contents of the article

1 Hardware

2 Adding support for an STM32 variant

3 Adding a new platform

4 My first driver

5 Using the drivers provided by NuttX

5.1 Using the STM32's timer

5.2 Using a device on the STM32's SPI bus

5.3 Using one of the STM32's ADCs

5.4 Using two of the STM32's ADCs

6 A new application calling on the drivers

7 An OS on a microcontroller … the user side: pthreads



G. Goavec-Mérou & J.-M. Friedt

> Read this article in full on our online reading platform Connect

Find this article (and many others) in GNU/Linux Magazine n°210, available from our store and on Connect!

Back on newsstands: our special "Vulnerability research" issue!

Good news if you missed it: our special issue devoted to vulnerability research is once again available from your newsagent! As a reminder, this special issue presents a selection of new tools for auditing your applications, across four main areas: static and dynamic analysis, cryptography, IoT and the Web. The guide is also available from our store and on our online reading platform Connect.

Contents

Static & dynamic analysis

p. 10 Concolic analysis of binaries with angr

p. 22 Introduction to plugin development for Radare2

p. 44 Advanced uses of AFL


Cryptography

p. 66 Let's test your crypto!

p. 76 Vulnerable cryptographic algorithms and implementations: detection with grap

IoT

p. 92 Toolkit 4.0: a review of exploitation tools for IoT

Web

p. 110 Passive detection of web vulnerabilities with Snuffleupagus

p. 116 Wapiti: hunting down web vulnerabilities

Discover the preface of the special issue on secure development!

It is no secret that viruses and ransomware circulate around the world. In 2017, two cyberattacks drew the spotlight: WannaCry in May and NotPetya in June. Adylkuzz, which operated during the same period, was barely mentioned, if at all. These are three major cyberattacks among dozens of others, but what is worth remembering is that all of them exploit a flaw, and that flaw is most often the result of developer negligence. Recall, for example, the goto fail bug at Apple in 2014. If that isn't a developer error…


if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
    goto fail;
if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
    goto fail;
    goto fail;
if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
    goto fail;


Which means that goto fail is executed whatever the result of the second if test.

Simply by following strict coding rules, the error would have stood out a little more clearly when the absent-minded developer did their copy-and-paste:


if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0) {
    goto fail;
}
goto fail;


Hence the value of being at least somewhat aware of the different types of possible attacks, of how to try to guard against them, and of which tools to use to test programs and plug the holes ahead of time. It is far less time-consuming to look for flaws upfront and fix them than to risk an attack that will inevitably strike at the worst possible moment, have to be fixed in a rush, and inevitably damage the image of your software and your company.

We will cover:

■ how to carry out a penetration audit;

■ the most common attacks against web applications;

■ how to integrate every aspect of security into the software development cycle;

■ which gcc options to use to produce higher-quality, more secure code;

■ and many other things that should prove most useful…

Finally, if you are holding the paper version of this magazine, you will certainly have noticed a change of form for this first special issue of the new formula: no more mooks; instead, a more varied magazine that naturally gives pride of place to the main dossier but also lets you catch up on related topics. I will let you discover it without further ado and wish you a pleasant read!

Tristan Colombo

Find GNU/Linux Magazine Hors-série n°97:


The editorial of GNU/Linux Magazine n°217!

As a software development magazine, we could not avoid asking the question: what about "learning to code", to use the consecrated phrase? Whether in the media or from politicians, we are constantly reminded of the major stakes of software development and artificial intelligence. But to be competitive, our children, the computer scientists and researchers of tomorrow, must be properly trained. So where do we stand?

That is the question I have asked myself practically every day since my children reached school age. What follows is in no way a generalization, but the fruit of a one-off observation that is certainly not unique. In kindergarten, the school was equipped with old PCs donated by parents and a few Macs: unusable, given the disparity of the hardware and the teaching staff's lack of technical knowledge. After my intervention, a few computers were running DouDouLinux in the final year of kindergarten. That was 5 years ago… since then the computers are no longer used at all, and while writing this editorial I notice that the DouDouLinux site is no longer reachable: a mere web server error, or the end of the project? One can of course use a Debian, make the desktop more accessible and install GCompris, but DouDouLinux came already configured for use by children. In that respect, PrimTux is probably an interesting alternative.

In my son's primary school there is, this time, a real computer room… where the machines all run Windows and are not maintained. So to learn to "program" (create animations with Scratch), the children travel some three hours round trip for one hour spent in front of a computer. All that to get access to working machines, but also to instructors who have acquired some rudiments of programming, the teachers never having had the chance to access that knowledge…

Our English neighbors were a little quicker than us to grasp the stakes of teaching programming. The striking example is the BBC Micro:bit, a small board that opens up the joys of development more broadly than Scratch does (JavaScript, with a block-based or plain editor, and Python), even though it is only used from the equivalent of our 6ème (first year of secondary school) onward.

Microsoft and Apple have clearly sensed the economic importance of these future consumers. The former by getting involved in the BBC Micro:bit project with an editor of its own, by buying Minecraft, by offering its site for learning to program (Windows-style of course, entirely graphical), and by distributing discount offers for students and teachers: once people are used to Microsoft products, turning back is much harder. The latter, on top of student and teacher promotions that aim only to anchor future users on the brand's hardware, also offered school field trips to properly format future buyers (until their recent suspension by the Minister of National Education). Edifying!

Thus, while some French administrations have successfully moved to free software, the national education system is the target of companies seeking to lock children into their proprietary vision of computing and development: use Windows or Apple as the operating system, develop in a purely graphical way… and become future customers!

The big brands are not really seeking to train pupils; that is neither their role nor their interest. The Current Age vs. Age started coding histogram, based on 39,441 developers, shows that among developers, the 35-54 year olds are those who learned to program earliest and in the greatest numbers. Has scientific curiosity deserted the youngest?

I wish you a pleasant read, and don't hesitate to leave your copies of GNU/Linux Magazine lying around wherever children pass by; you never know, they might discover that another kind of computing is possible…

Tristan Colombo

Find GNU/Linux Magazine n°217: